Test Report: Hyperkit_macOS 19423

1f2c26fb323282b69eee479fdee82bbe44410c3d:2024-08-16:35811

Failed tests (13/322)

Order  Failed test  Duration (s)
22 TestOffline 195.33
46 TestCertOptions 251.84
47 TestCertExpiration 1752.48
48 TestDockerFlags 252.08
49 TestForceSystemdFlag 252.03
50 TestForceSystemdEnv 233.67
175 TestMultiControlPlane/serial/RestartCluster 179.51
177 TestMultiControlPlane/serial/AddSecondaryNode 75.64
221 TestMountStart/serial/StartWithMountFirst 136.87
236 TestMultiNode/serial/RestartMultiNode 138.11
243 TestScheduledStopUnix 141.96
267 TestPause/serial/Start 139.08
307 TestNetworkPlugins/group/false/Start 76.89
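Each failure row above follows a fixed `order name duration` layout, so the table can be pulled into structured records for triage (e.g. to total the wall-clock time lost to failures). A minimal Python sketch; the column meanings are assumed from the table header:

```python
# Parse the failed-test rows from this report into (order, name, seconds)
# tuples, then total the time spent in failing tests.
rows = """\
22 TestOffline 195.33
46 TestCertOptions 251.84
47 TestCertExpiration 1752.48
48 TestDockerFlags 252.08
49 TestForceSystemdFlag 252.03
50 TestForceSystemdEnv 233.67
175 TestMultiControlPlane/serial/RestartCluster 179.51
177 TestMultiControlPlane/serial/AddSecondaryNode 75.64
221 TestMountStart/serial/StartWithMountFirst 136.87
236 TestMultiNode/serial/RestartMultiNode 138.11
243 TestScheduledStopUnix 141.96
267 TestPause/serial/Start 139.08
307 TestNetworkPlugins/group/false/Start 76.89
"""

failures = []
for line in rows.splitlines():
    order, name, duration = line.split()
    failures.append((int(order), name, float(duration)))

total_seconds = round(sum(d for _, _, d in failures), 2)
print(len(failures), total_seconds)
```

Summing the durations shows roughly 3825 seconds (over an hour) spent in the 13 failing tests, with TestCertExpiration alone accounting for nearly half of that.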
TestOffline (195.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-266000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-266000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.920256352s)

-- stdout --
	* [offline-docker-266000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-266000" primary control-plane node in "offline-docker-266000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-266000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0816 06:10:20.804983    5252 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:10:20.805281    5252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:10:20.805288    5252 out.go:358] Setting ErrFile to fd 2...
	I0816 06:10:20.805292    5252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:10:20.805475    5252 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:10:20.836324    5252 out.go:352] Setting JSON to false
	I0816 06:10:20.861144    5252 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3598,"bootTime":1723810222,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:10:20.861239    5252 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:10:20.923365    5252 out.go:177] * [offline-docker-266000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:10:20.966369    5252 notify.go:220] Checking for updates...
	I0816 06:10:20.991365    5252 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:10:21.054190    5252 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:10:21.075276    5252 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:10:21.097233    5252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:10:21.118207    5252 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:10:21.139222    5252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:10:21.160448    5252 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:10:21.189585    5252 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 06:10:21.211338    5252 start.go:297] selected driver: hyperkit
	I0816 06:10:21.211375    5252 start.go:901] validating driver "hyperkit" against <nil>
	I0816 06:10:21.211443    5252 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:10:21.215916    5252 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:10:21.216079    5252 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:10:21.224395    5252 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:10:21.228055    5252 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:10:21.228075    5252 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:10:21.228108    5252 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 06:10:21.228319    5252 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:10:21.228352    5252 cni.go:84] Creating CNI manager for ""
	I0816 06:10:21.228367    5252 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 06:10:21.228372    5252 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 06:10:21.228434    5252 start.go:340] cluster config:
	{Name:offline-docker-266000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:10:21.228523    5252 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:10:21.275388    5252 out.go:177] * Starting "offline-docker-266000" primary control-plane node in "offline-docker-266000" cluster
	I0816 06:10:21.296393    5252 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:10:21.296519    5252 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:10:21.296542    5252 cache.go:56] Caching tarball of preloaded images
	I0816 06:10:21.296769    5252 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:10:21.296789    5252 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:10:21.297307    5252 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/offline-docker-266000/config.json ...
	I0816 06:10:21.297351    5252 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/offline-docker-266000/config.json: {Name:mkf57e7b6f1ba7b3c8c2611fc7aa4dc7e83aa372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:10:21.318102    5252 start.go:360] acquireMachinesLock for offline-docker-266000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:10:21.318301    5252 start.go:364] duration metric: took 153.365µs to acquireMachinesLock for "offline-docker-266000"
	I0816 06:10:21.318356    5252 start.go:93] Provisioning new machine with config: &{Name:offline-docker-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:10:21.318455    5252 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:10:21.360133    5252 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:10:21.360275    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:10:21.360313    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:10:21.369099    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53703
	I0816 06:10:21.369447    5252 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:10:21.369874    5252 main.go:141] libmachine: Using API Version  1
	I0816 06:10:21.369891    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:10:21.370108    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:10:21.370213    5252 main.go:141] libmachine: (offline-docker-266000) Calling .GetMachineName
	I0816 06:10:21.370305    5252 main.go:141] libmachine: (offline-docker-266000) Calling .DriverName
	I0816 06:10:21.370402    5252 start.go:159] libmachine.API.Create for "offline-docker-266000" (driver="hyperkit")
	I0816 06:10:21.370443    5252 client.go:168] LocalClient.Create starting
	I0816 06:10:21.370475    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:10:21.370524    5252 main.go:141] libmachine: Decoding PEM data...
	I0816 06:10:21.370538    5252 main.go:141] libmachine: Parsing certificate...
	I0816 06:10:21.370627    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:10:21.370666    5252 main.go:141] libmachine: Decoding PEM data...
	I0816 06:10:21.370679    5252 main.go:141] libmachine: Parsing certificate...
	I0816 06:10:21.370697    5252 main.go:141] libmachine: Running pre-create checks...
	I0816 06:10:21.370703    5252 main.go:141] libmachine: (offline-docker-266000) Calling .PreCreateCheck
	I0816 06:10:21.370779    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:21.370933    5252 main.go:141] libmachine: (offline-docker-266000) Calling .GetConfigRaw
	I0816 06:10:21.371382    5252 main.go:141] libmachine: Creating machine...
	I0816 06:10:21.371393    5252 main.go:141] libmachine: (offline-docker-266000) Calling .Create
	I0816 06:10:21.371481    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:21.371610    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:10:21.371481    5273 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:10:21.371679    5252 main.go:141] libmachine: (offline-docker-266000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:10:21.843152    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:10:21.843067    5273 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/id_rsa...
	I0816 06:10:21.957080    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:10:21.957017    5273 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk...
	I0816 06:10:21.957094    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Writing magic tar header
	I0816 06:10:21.957175    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Writing SSH key tar header
	I0816 06:10:21.957555    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:10:21.957513    5273 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000 ...
	I0816 06:10:22.441904    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:22.441928    5252 main.go:141] libmachine: (offline-docker-266000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid
	I0816 06:10:22.441943    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Using UUID cf7bbe00-8ced-49bc-b9ae-192c3acf4f4d
	I0816 06:10:22.606889    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Generated MAC 7e:f4:ee:e8:a4:2d
	I0816 06:10:22.606913    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000
	I0816 06:10:22.607001    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cf7bbe00-8ced-49bc-b9ae-192c3acf4f4d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0816 06:10:22.607052    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"cf7bbe00-8ced-49bc-b9ae-192c3acf4f4d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0816 06:10:22.607106    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "cf7bbe00-8ced-49bc-b9ae-192c3acf4f4d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage,
/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000"}
	I0816 06:10:22.607149    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U cf7bbe00-8ced-49bc-b9ae-192c3acf4f4d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machi
nes/offline-docker-266000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000"
	I0816 06:10:22.607158    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:10:22.610029    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 DEBUG: hyperkit: Pid is 5298
	I0816 06:10:22.610434    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 0
	I0816 06:10:22.610447    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:22.610545    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:22.611387    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:22.611506    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:22.611523    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:22.611558    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:22.611585    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:22.611598    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:22.611610    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:22.611624    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:22.611637    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:22.611651    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:22.611666    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:22.611682    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:22.611705    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:22.611719    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:22.611729    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:22.611739    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:22.611746    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:22.611754    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:22.611762    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:22.611778    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:22.617891    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:10:22.746828    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:10:22.747450    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:10:22.747465    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:10:22.747472    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:10:22.747478    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:10:23.126205    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:10:23.126220    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:10:23.241208    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:10:23.241237    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:10:23.241270    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:10:23.241318    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:10:23.242212    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:10:23.242229    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:10:24.612233    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 1
	I0816 06:10:24.612247    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:24.612364    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:24.613190    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:24.613250    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:24.613265    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:24.613293    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:24.613309    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:24.613337    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:24.613352    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:24.613363    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:24.613377    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:24.613429    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:24.613448    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:24.613462    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:24.613503    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:24.613537    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:24.613552    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:24.613566    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:24.613579    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:24.613595    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:24.613608    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:24.613621    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:26.613906    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 2
	I0816 06:10:26.613936    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:26.614064    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:26.614848    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:26.614912    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:26.614925    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:26.614938    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:26.614949    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:26.614956    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:26.614965    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:26.614972    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:26.614982    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:26.614992    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:26.615000    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:26.615008    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:26.615017    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:26.615023    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:26.615032    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:26.615039    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:26.615046    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:26.615061    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:26.615069    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:26.615078    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:28.616063    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 3
	I0816 06:10:28.616082    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:28.616146    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:28.616962    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:28.617023    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:28.617033    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:28.617045    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:28.617054    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:28.617073    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:28.617080    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:28.617087    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:28.617093    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:28.617101    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:28.617122    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:28.617133    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:28.617142    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:28.617158    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:28.617171    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:28.617188    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:28.617198    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:28.617206    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:28.617214    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:28.617228    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:28.641741    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:10:28.641881    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:10:28.641892    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:10:28.661795    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:10:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:10:30.618561    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 4
	I0816 06:10:30.618582    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:30.618703    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:30.619519    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:30.619580    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:30.619590    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:30.619600    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:30.619609    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:30.619616    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:30.619633    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:30.619645    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:30.619655    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:30.619665    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:30.619686    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:30.619698    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:30.619709    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:30.619724    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:30.619732    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:30.619744    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:30.619753    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:30.619764    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:30.619782    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:30.619795    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:32.620620    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 5
	I0816 06:10:32.620635    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:32.620703    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:32.621473    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:32.621534    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:32.621548    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:32.621564    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:32.621576    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:32.621586    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:32.621593    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:32.621599    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:32.621617    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:32.621625    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:32.621639    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:32.621660    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:32.621667    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:32.621674    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:32.621683    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:32.621694    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:32.621704    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:32.621718    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:32.621726    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:32.621734    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:34.623550    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 6
	I0816 06:10:34.623562    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:34.623666    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:34.624594    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:34.624641    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:34.624651    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:34.624660    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:34.624666    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:34.624673    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:34.624678    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:34.624684    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:34.624690    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:34.624707    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:34.624716    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:34.624723    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:34.624732    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:34.624740    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:34.624746    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:34.624752    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:34.624759    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:34.624767    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:34.624777    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:34.624786    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:36.625495    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 7
	I0816 06:10:36.625509    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:36.625609    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:36.626382    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:36.626443    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:36.626455    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:36.626470    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:36.626476    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:36.626493    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:36.626508    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:36.626539    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:36.626549    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:36.626557    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:36.626567    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:36.626574    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:36.626583    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:36.626593    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:36.626602    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:36.626614    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:36.626622    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:36.626629    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:36.626638    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:36.626646    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:38.628480    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 8
	I0816 06:10:38.628494    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:38.628555    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:38.629310    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:38.629362    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:38.629370    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:38.629395    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:38.629410    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:38.629417    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:38.629433    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:38.629445    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:38.629452    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:38.629461    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:38.629473    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:38.629483    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:38.629501    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:38.629513    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:38.629533    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:38.629543    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:38.629551    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:38.629559    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:38.629568    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:38.629576    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:40.630098    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 9
	I0816 06:10:40.630129    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:40.630225    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:40.630993    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:40.631046    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:40.631055    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:40.631064    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:40.631079    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:40.631092    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:40.631102    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:40.631110    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:40.631116    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:40.631122    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:40.631137    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:40.631151    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:40.631159    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:40.631169    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:40.631176    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:40.631182    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:40.631189    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:40.631195    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:40.631203    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:40.631210    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:42.631383    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 10
	I0816 06:10:44.632654    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 11
	I0816 06:10:46.634187    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 12
	I0816 06:10:48.637295    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 13
	I0816 06:10:50.638621    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 14
	I0816 06:10:52.641662    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 15
	I0816 06:10:54.643688    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 16
	[Attempts 10-16 each repeat the search for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases: 18 entries found, none matching; dhcp lease table identical to Attempt 9, omitted here]
	I0816 06:10:54.644834    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:54.644847    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:54.644855    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:54.644862    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:54.644868    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:56.646810    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 17
	I0816 06:10:56.646823    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:56.646882    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:56.647703    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:56.647748    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:56.647756    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:56.647764    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:56.647771    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:56.647776    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:56.647790    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:56.647801    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:56.647808    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:56.647814    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:56.647838    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:56.647850    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:56.647856    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:56.647870    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:56.647882    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:56.647890    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:56.647898    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:56.647905    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:56.647917    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:56.647926    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:10:58.648987    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 18
	I0816 06:10:58.649000    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:10:58.649073    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:10:58.649962    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:10:58.650002    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:10:58.650010    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:10:58.650030    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:10:58.650039    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:10:58.650046    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:10:58.650052    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:10:58.650059    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:10:58.650067    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:10:58.650077    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:10:58.650085    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:10:58.650094    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:10:58.650102    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:10:58.650109    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:10:58.650117    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:10:58.650133    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:10:58.650146    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:10:58.650162    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:10:58.650171    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:10:58.650188    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:00.652162    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 19
	I0816 06:11:00.652175    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:00.652226    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:00.653008    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:00.653063    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:00.653075    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:00.653083    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:00.653100    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:00.653107    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:00.653114    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:00.653121    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:00.653140    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:00.653150    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:00.653160    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:00.653167    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:00.653175    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:00.653190    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:00.653202    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:00.653210    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:00.653225    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:00.653234    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:00.653243    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:00.653257    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:02.654482    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 20
	I0816 06:11:02.654493    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:02.654556    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:02.655398    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:02.655440    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:02.655450    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:02.655463    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:02.655474    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:02.655491    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:02.655498    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:02.655524    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:02.655541    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:02.655552    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:02.655561    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:02.655570    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:02.655580    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:02.655588    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:02.655596    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:02.655603    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:02.655615    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:02.655626    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:02.655635    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:02.655642    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:04.657588    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 21
	I0816 06:11:04.657602    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:04.657688    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:04.658486    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:04.658537    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:04.658550    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:04.658574    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:04.658586    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:04.658596    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:04.658614    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:04.658628    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:04.658641    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:04.658649    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:04.658660    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:04.658668    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:04.658675    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:04.658689    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:04.658707    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:04.658716    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:04.658722    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:04.658740    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:04.658753    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:04.658762    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:06.658745    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 22
	I0816 06:11:06.658760    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:06.658832    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:06.659726    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:06.659779    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:06.659791    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:06.659798    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:06.659804    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:06.659810    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:06.659817    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:06.659831    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:06.659839    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:06.659846    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:06.659855    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:06.659863    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:06.659872    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:06.659879    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:06.659887    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:06.659903    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:06.659915    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:06.659923    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:06.659931    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:06.659940    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:08.661691    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 23
	I0816 06:11:08.661707    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:08.661786    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:08.662678    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:08.662725    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:08.662733    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:08.662741    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:08.662749    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:08.662755    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:08.662765    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:08.662772    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:08.662778    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:08.662784    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:08.662790    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:08.662797    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:08.662805    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:08.662811    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:08.662818    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:08.662826    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:08.662833    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:08.662848    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:08.662863    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:08.662877    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:10.663976    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 24
	I0816 06:11:10.663991    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:10.664039    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:10.665024    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:10.665069    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:10.665084    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:10.665100    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:10.665111    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:10.665119    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:10.665128    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:10.665135    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:10.665142    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:10.665150    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:10.665174    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:10.665184    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:10.665200    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:10.665212    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:10.665221    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:10.665232    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:10.665241    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:10.665248    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:10.665260    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:10.665278    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:12.666326    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 25
	I0816 06:11:12.666338    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:12.666386    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:12.667201    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:12.667242    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:12.667256    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:12.667274    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:12.667285    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:12.667292    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:12.667306    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:12.667313    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:12.667322    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:12.667331    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:12.667339    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:12.667349    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:12.667358    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:12.667366    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:12.667375    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:12.667381    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:12.667389    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:12.667396    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:12.667402    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:12.667412    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:14.669490    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 26
	I0816 06:11:14.669503    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:14.669534    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:14.670360    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:14.670398    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:14.670407    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:14.670417    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:14.670422    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:14.670436    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:14.670457    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:14.670471    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:14.670481    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:14.670496    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:14.670505    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:14.670514    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:14.670522    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:14.670537    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:14.670551    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:14.670560    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:14.670573    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:14.670586    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:14.670596    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:14.670604    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:16.672526    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 27
	I0816 06:11:16.672540    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:16.672598    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:16.673668    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:16.673705    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:16.673716    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:16.673726    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:16.673743    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:16.673756    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:16.673766    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:16.673773    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:16.673780    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:16.673796    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:16.673806    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:16.673820    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:16.673830    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:16.673838    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:16.673844    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:16.673851    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:16.673859    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:16.673873    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:16.673886    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:16.673896    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:18.675880    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 28
	I0816 06:11:18.675891    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:18.675943    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:18.676763    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:18.676806    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:18.676815    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:18.676836    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:18.676849    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:18.676862    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:18.676871    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:18.676885    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:18.676897    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:18.676914    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:18.676923    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:18.676930    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:18.676938    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:18.676945    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:18.676960    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:18.676976    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:18.676987    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:18.676999    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:18.677009    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:18.677017    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:20.678959    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 29
	I0816 06:11:20.678985    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:20.679030    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:20.679887    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for 7e:f4:ee:e8:a4:2d in /var/db/dhcpd_leases ...
	I0816 06:11:20.679938    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:20.679950    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:20.679959    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:20.679967    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:20.679983    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:20.679990    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:20.679998    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:20.680006    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:20.680013    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:20.680033    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:20.680061    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:20.680073    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:20.680081    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:20.680089    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:20.680096    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:20.680104    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:20.680111    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:20.680117    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:20.680126    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:22.680411    5252 client.go:171] duration metric: took 1m1.311597031s to LocalClient.Create
	I0816 06:11:24.682017    5252 start.go:128] duration metric: took 1m3.365243178s to createHost
	I0816 06:11:24.682031    5252 start.go:83] releasing machines lock for "offline-docker-266000", held for 1m3.365411388s
	W0816 06:11:24.682047    5252 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:f4:ee:e8:a4:2d
	I0816 06:11:24.682355    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:11:24.682379    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:11:24.692170    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53743
	I0816 06:11:24.692590    5252 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:11:24.693064    5252 main.go:141] libmachine: Using API Version  1
	I0816 06:11:24.693077    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:11:24.693304    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:11:24.693717    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:11:24.693758    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:11:24.702312    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53745
	I0816 06:11:24.702761    5252 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:11:24.703279    5252 main.go:141] libmachine: Using API Version  1
	I0816 06:11:24.703313    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:11:24.703555    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:11:24.703802    5252 main.go:141] libmachine: (offline-docker-266000) Calling .GetState
	I0816 06:11:24.703894    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.703975    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:24.704973    5252 main.go:141] libmachine: (offline-docker-266000) Calling .DriverName
	I0816 06:11:24.726251    5252 out.go:177] * Deleting "offline-docker-266000" in hyperkit ...
	I0816 06:11:24.747411    5252 main.go:141] libmachine: (offline-docker-266000) Calling .Remove
	I0816 06:11:24.747545    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.747555    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.747624    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:24.748577    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.748608    5252 main.go:141] libmachine: (offline-docker-266000) DBG | waiting for graceful shutdown
	I0816 06:11:25.750704    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:25.750842    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:25.751795    5252 main.go:141] libmachine: (offline-docker-266000) DBG | waiting for graceful shutdown
	I0816 06:11:26.752161    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:26.752236    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:26.753850    5252 main.go:141] libmachine: (offline-docker-266000) DBG | waiting for graceful shutdown
	I0816 06:11:27.754256    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:27.754347    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:27.755243    5252 main.go:141] libmachine: (offline-docker-266000) DBG | waiting for graceful shutdown
	I0816 06:11:28.756690    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:28.756708    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:28.757285    5252 main.go:141] libmachine: (offline-docker-266000) DBG | waiting for graceful shutdown
	I0816 06:11:29.757407    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:29.757513    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5298
	I0816 06:11:29.758571    5252 main.go:141] libmachine: (offline-docker-266000) DBG | sending sigkill
	I0816 06:11:29.758584    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 06:11:29.770175    5252 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:f4:ee:e8:a4:2d
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:f4:ee:e8:a4:2d
	I0816 06:11:29.770188    5252 start.go:729] Will try again in 5 seconds ...
	I0816 06:11:29.788378    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:11:29 WARN : hyperkit: failed to read stdout: EOF
	I0816 06:11:29.788396    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:11:29 WARN : hyperkit: failed to read stderr: EOF
	I0816 06:11:34.772164    5252 start.go:360] acquireMachinesLock for offline-docker-266000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:12:27.539368    5252 start.go:364] duration metric: took 52.768574566s to acquireMachinesLock for "offline-docker-266000"
	I0816 06:12:27.539401    5252 start.go:93] Provisioning new machine with config: &{Name:offline-docker-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:12:27.539451    5252 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:12:27.560917    5252 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:12:27.560990    5252 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:12:27.561084    5252 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:12:27.570028    5252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53753
	I0816 06:12:27.570510    5252 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:12:27.570985    5252 main.go:141] libmachine: Using API Version  1
	I0816 06:12:27.571019    5252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:12:27.571266    5252 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:12:27.571380    5252 main.go:141] libmachine: (offline-docker-266000) Calling .GetMachineName
	I0816 06:12:27.571489    5252 main.go:141] libmachine: (offline-docker-266000) Calling .DriverName
	I0816 06:12:27.571613    5252 start.go:159] libmachine.API.Create for "offline-docker-266000" (driver="hyperkit")
	I0816 06:12:27.571633    5252 client.go:168] LocalClient.Create starting
	I0816 06:12:27.571663    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:12:27.571714    5252 main.go:141] libmachine: Decoding PEM data...
	I0816 06:12:27.571725    5252 main.go:141] libmachine: Parsing certificate...
	I0816 06:12:27.571771    5252 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:12:27.571810    5252 main.go:141] libmachine: Decoding PEM data...
	I0816 06:12:27.571821    5252 main.go:141] libmachine: Parsing certificate...
	I0816 06:12:27.571835    5252 main.go:141] libmachine: Running pre-create checks...
	I0816 06:12:27.571841    5252 main.go:141] libmachine: (offline-docker-266000) Calling .PreCreateCheck
	I0816 06:12:27.571920    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.571940    5252 main.go:141] libmachine: (offline-docker-266000) Calling .GetConfigRaw
	I0816 06:12:27.602918    5252 main.go:141] libmachine: Creating machine...
	I0816 06:12:27.602942    5252 main.go:141] libmachine: (offline-docker-266000) Calling .Create
	I0816 06:12:27.603028    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.603215    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:12:27.603022    5452 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:12:27.603259    5252 main.go:141] libmachine: (offline-docker-266000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:12:27.836412    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:12:27.836348    5452 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/id_rsa...
	I0816 06:12:27.945040    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:12:27.945003    5452 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk...
	I0816 06:12:27.945054    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Writing magic tar header
	I0816 06:12:27.945064    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Writing SSH key tar header
	I0816 06:12:27.945718    5252 main.go:141] libmachine: (offline-docker-266000) DBG | I0816 06:12:27.945671    5452 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000 ...
	I0816 06:12:28.323112    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:28.323136    5252 main.go:141] libmachine: (offline-docker-266000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid
	I0816 06:12:28.323150    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Using UUID bee82ece-6c40-40b1-8edc-24fe0ce2ba30
	I0816 06:12:28.348960    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Generated MAC ea:2c:a4:97:80:a3
	I0816 06:12:28.348977    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000
	I0816 06:12:28.349010    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bee82ece-6c40-40b1-8edc-24fe0ce2ba30", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:12:28.349034    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bee82ece-6c40-40b1-8edc-24fe0ce2ba30", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:12:28.349081    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bee82ece-6c40-40b1-8edc-24fe0ce2ba30", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000"}
	I0816 06:12:28.349115    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bee82ece-6c40-40b1-8edc-24fe0ce2ba30 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/offline-docker-266000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-266000"
	I0816 06:12:28.349134    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:12:28.352228    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 DEBUG: hyperkit: Pid is 5453
	I0816 06:12:28.352678    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 0
	I0816 06:12:28.352692    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:28.352792    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:28.353866    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:28.353961    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:28.353977    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:28.354011    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:28.354043    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:28.354100    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:28.354133    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:28.354143    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:28.354151    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:28.354168    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:28.354183    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:28.354206    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:28.354216    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:28.354226    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:28.354235    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:28.354241    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:28.354255    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:28.354265    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:28.354271    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:28.354279    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
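The "Searching for … in /var/db/dhcpd_leases" loop above is the hyperkit driver polling macOS's DHCP lease file for an entry whose hardware address matches the VM's freshly generated MAC; the test fails because no such entry ever appears. A minimal Go sketch of that kind of lease lookup is below. This is a hypothetical illustration, not minikube's actual code: `findIPForMAC` and the inline `sample` lease text are assumptions, modeled on the brace-delimited `name=/ip_address=/hw_address=` format macOS writes to /var/db/dhcpd_leases.

```go
package main

import (
	"fmt"
	"strings"
)

// sample mimics the macOS /var/db/dhcpd_leases format (an assumption for
// illustration): brace-delimited entries whose hw_address is prefixed
// with a hardware type, e.g. "1,<mac>".
var sample = `{
	name=minikube
	ip_address=192.169.0.19
	hw_address=1,e2:ee:3d:77:e6:db
	lease=0x66c0a0dc
}
{
	name=minikube
	ip_address=192.169.0.18
	hw_address=1,4e:1f:38:c8:2:e
	lease=0x66c0a01c
}`

// findIPForMAC scans lease-file text for an entry whose hw_address ends
// with the given MAC and returns that entry's ip_address.
func findIPForMAC(leases, mac string) (string, bool) {
	var ip string
	matched := false
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "{": // start of a new lease entry: reset state
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// tolerate an optional "<type>," prefix before the MAC
			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
				matched = true
			}
		case line == "}": // end of entry: report if this one matched
			if matched && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPForMAC(sample, "e2:ee:3d:77:e6:db"); ok {
		fmt.Println("found", ip)
	}
	if _, ok := findIPForMAC(sample, "ea:2c:a4:97:80:a3"); !ok {
		fmt.Println("ea:2c:a4:97:80:a3 not yet in leases; would retry")
	}
}
```

The driver in the log does essentially this in a retry loop ("Attempt 0", "Attempt 1", …) until the MAC shows up or the timeout quoted in the StartHost error is hit.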
	I0816 06:12:28.360449    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:12:28.368492    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/offline-docker-266000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:12:28.369339    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:12:28.369356    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:12:28.369363    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:12:28.369372    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:12:28.741221    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:12:28.741237    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:12:28.855772    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:12:28.855787    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:12:28.855799    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:12:28.855818    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:12:28.856704    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:12:28.856716    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:12:30.356091    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 1
	I0816 06:12:30.356105    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:30.356204    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:30.356997    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:30.357049    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:30.357058    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:30.357067    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:30.357074    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:30.357097    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:30.357104    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:30.357113    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:30.357123    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:30.357140    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:30.357152    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:30.357162    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:30.357171    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:30.357178    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:30.357184    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:30.357197    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:30.357210    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:30.357221    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:30.357228    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:30.357234    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:32.359208    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 2
	I0816 06:12:32.359222    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:32.359319    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:32.360108    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:32.360148    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:32.360158    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:32.360168    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:32.360177    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:32.360214    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:32.360237    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:32.360246    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:32.360258    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:32.360268    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:32.360284    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:32.360296    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:32.360304    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:32.360311    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:32.360318    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:32.360324    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:32.360329    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:32.360336    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:32.360344    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:32.360355    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:34.231856    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 06:12:34.232037    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 06:12:34.232052    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 06:12:34.251590    5252 main.go:141] libmachine: (offline-docker-266000) DBG | 2024/08/16 06:12:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 06:12:34.360735    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 3
	I0816 06:12:34.360765    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:34.360942    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:34.362373    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:34.362493    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:34.362517    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:34.362531    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:34.362542    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:34.362560    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:34.362581    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:34.362594    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:34.362605    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:34.362666    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:34.362692    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:34.362707    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:34.362724    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:34.362738    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:34.362761    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:34.362790    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:34.362802    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:34.362810    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:34.362822    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:34.362833    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:36.364382    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 4
	I0816 06:12:36.364408    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:36.364477    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:36.365280    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:36.365338    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:36.365352    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:36.365372    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:36.365380    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:36.365400    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:36.365414    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:36.365433    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:36.365448    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:36.365461    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:36.365468    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:36.365475    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:36.365481    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:36.365511    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:36.365522    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:36.365530    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:36.365560    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:36.365574    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:36.365586    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:36.365596    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:38.367474    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 5
	I0816 06:12:38.367489    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:38.367563    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:38.368564    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:38.368616    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:38.368634    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:38.368645    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:38.368654    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:38.368661    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:38.368667    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:38.368674    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:38.368681    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:38.368687    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:38.368703    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:38.368715    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:38.368725    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:38.368734    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:38.368745    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:38.368755    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:38.368763    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:38.368777    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:38.368785    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:38.368793    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:40.370194    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 6
	I0816 06:12:40.370207    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:40.370324    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:40.371134    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:40.371174    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:40.371189    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:40.371202    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:40.371212    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:40.371226    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:40.371250    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:40.371266    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:40.371277    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:40.371284    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:40.371293    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:40.371304    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:40.371314    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:40.371321    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:40.371330    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:40.371359    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:40.371372    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:40.371379    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:40.371386    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:40.371409    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:42.373329    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 7
	I0816 06:12:42.373341    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:42.373407    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:42.374227    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:42.374261    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:42.374274    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:42.374292    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:42.374300    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:42.374309    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:42.374318    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:42.374342    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:42.374351    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:42.374359    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:42.374366    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:42.374373    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:42.374382    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:42.374391    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:42.374400    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:42.374408    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:42.374415    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:42.374422    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:42.374431    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:42.374448    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:44.376449    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 8
	I0816 06:12:44.376471    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:44.376510    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:44.377302    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:44.377341    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:44.377351    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:44.377369    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:44.377379    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:44.377387    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:44.377394    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:44.377401    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:44.377409    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:44.377420    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:44.377428    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:44.377436    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:44.377444    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:44.377452    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:44.377459    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:44.377465    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:44.377473    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:44.377480    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:44.377488    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:44.377507    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:46.378913    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 9
	I0816 06:12:46.378929    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:46.378962    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:46.379811    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:46.379856    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:46.379866    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:46.379876    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:46.379890    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:46.379898    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:46.379904    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:46.379911    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:46.379919    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:46.379929    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:46.379941    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:46.379949    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:46.379958    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:46.379965    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:46.379973    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:46.379980    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:46.379986    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:46.380000    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:46.380012    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:46.380022    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:48.381983    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 10
	I0816 06:12:48.382004    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:48.382077    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:48.382904    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:48.382965    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:48.382977    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:48.382991    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:48.383001    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:48.383008    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:48.383016    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:48.383026    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:48.383035    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:48.383050    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:48.383062    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:48.383074    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:48.383083    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:48.383099    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:48.383109    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:48.383116    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:48.383123    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:48.383135    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:48.383147    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:48.383157    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:50.385081    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 11
	I0816 06:12:50.385094    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:50.385183    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:50.386231    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:50.386246    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:50.386258    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:50.386270    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:50.386277    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:50.386284    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:50.386290    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:50.386297    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:50.386304    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:50.386310    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:50.386318    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:50.386324    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:50.386339    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:50.386348    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:50.386355    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:50.386366    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:50.386376    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:50.386384    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:50.386393    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:50.386402    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:52.386442    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 12
	I0816 06:12:52.386453    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:52.386525    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:52.387288    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:52.387343    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:52.387351    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:52.387362    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:52.387375    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:52.387391    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:52.387403    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:52.387412    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:52.387419    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:52.387428    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:52.387438    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:52.387456    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:52.387463    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:52.387471    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:52.387496    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:52.387506    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:52.387513    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:52.387529    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:52.387539    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:52.387547    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:54.389500    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 13
	I0816 06:12:54.389522    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:54.389573    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:54.390362    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:54.390411    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:54.390426    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:54.390435    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:54.390442    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:54.390450    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:54.390462    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:54.390469    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:54.390475    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:54.390487    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:54.390500    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:54.390515    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:54.390535    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:54.390551    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:54.390563    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:54.390575    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:54.390582    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:54.390606    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:54.390618    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:54.390631    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:56.391570    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 14
	I0816 06:12:56.391584    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:56.391650    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:56.392470    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:56.392538    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:56.392547    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:56.392558    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:56.392564    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:56.392581    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:56.392595    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:56.392603    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:56.392615    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:56.392623    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:56.392632    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:56.392642    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:56.392648    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:56.392655    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:56.392668    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:56.392682    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:56.392694    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:56.392714    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:56.392733    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:56.392751    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:58.393486    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 15
	I0816 06:12:58.393497    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:58.393588    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:12:58.394378    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:12:58.394423    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:58.394444    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:58.394468    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:58.394477    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:58.394492    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:58.394501    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:58.394515    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:58.394528    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:58.394536    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:58.394545    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:58.394551    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:58.394566    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:58.394577    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:58.394589    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:58.394596    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:58.394618    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:58.394650    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:58.394656    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:58.394664    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:00.396544    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 16
	I0816 06:13:00.396558    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:00.396659    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:00.397442    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:00.397502    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:00.397514    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:00.397523    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:00.397530    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:00.397538    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:00.397544    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:00.397552    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:00.397558    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:00.397566    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:00.397575    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:00.397582    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:00.397590    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:00.397603    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:00.397612    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:00.397619    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:00.397627    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:00.397639    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:00.397650    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:00.397660    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:02.397858    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 17
	I0816 06:13:02.397872    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:02.397934    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:02.398900    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:02.398935    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:02.398943    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:02.398956    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:02.398969    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:02.398978    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:02.398987    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:02.398995    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:02.399001    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:02.399007    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:02.399021    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:02.399029    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:02.399036    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:02.399044    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:02.399052    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:02.399060    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:02.399066    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:02.399074    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:02.399081    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:02.399089    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:04.399979    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 18
	I0816 06:13:04.399991    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:04.400042    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:04.400829    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:04.400893    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:04.400906    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:04.400921    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:04.400934    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:04.400943    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:04.400951    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:04.400959    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:04.400968    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:04.400975    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:04.400981    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:04.401000    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:04.401015    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:04.401030    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:04.401042    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:04.401050    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:04.401058    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:04.401069    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:04.401077    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:04.401087    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:06.401677    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 19
	I0816 06:13:06.401692    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:06.401776    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:06.402625    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:06.402689    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:06.402700    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:06.402715    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:06.402726    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:06.402734    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:06.402741    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:06.402747    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:06.402755    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:06.402770    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:06.402784    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:06.402793    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:06.402799    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:06.402821    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:06.402836    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:06.402844    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:06.402852    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:06.402867    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:06.402886    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:06.402902    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:08.403777    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 20
	I0816 06:13:08.403792    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:08.403894    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:08.404715    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:08.404763    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:08.404775    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:08.404786    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:08.404807    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:08.404821    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:08.404832    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:08.404840    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:08.404849    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:08.404860    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:08.404868    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:08.404874    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:08.404889    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:08.404902    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:08.404915    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:08.404922    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:08.404928    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:08.404955    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:08.404972    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:08.404981    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:10.406871    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 21
	I0816 06:13:10.406884    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:10.406931    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:10.407744    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:10.407813    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:10.407824    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:10.407832    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:10.407840    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:10.407853    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:10.407872    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:10.407881    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:10.407891    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:10.407905    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:10.407912    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:10.407919    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:10.407926    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:10.407932    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:10.407938    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:10.407958    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:10.407970    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:10.407978    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:10.407993    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:10.408004    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:12.408521    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 22
	I0816 06:13:12.408534    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:12.408630    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:12.409446    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:12.409497    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:12.409506    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:12.409513    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:12.409520    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:12.409528    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:12.409534    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:12.409546    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:12.409556    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:12.409564    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:12.409570    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:12.409582    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:12.409595    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:12.409603    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:12.409610    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:12.409617    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:12.409665    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:12.409700    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:12.409707    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:12.409715    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:14.410646    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 23
	I0816 06:13:14.410661    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:14.410702    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:14.411797    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:14.411820    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:14.411840    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:14.411855    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:14.411866    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:14.411876    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:14.411884    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:14.411891    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:14.411902    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:14.411910    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:14.411917    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:14.411930    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:14.411940    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:14.411953    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:14.411963    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:14.411971    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:14.411985    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:14.411992    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:14.411998    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:14.412016    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:16.412070    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 24
	I0816 06:13:16.412081    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:16.412145    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:16.412955    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:16.412999    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:16.413012    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:16.413036    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:16.413046    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:16.413053    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:16.413063    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:16.413069    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:16.413075    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:16.413082    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:16.413090    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:16.413103    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:16.413112    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:16.413118    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:16.413125    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:16.413133    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:16.413143    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:16.413151    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:16.413158    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:16.413165    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:18.413672    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 25
	I0816 06:13:18.413689    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:18.413752    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:18.414578    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:18.414635    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:18.414645    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:18.414654    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:18.414663    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:18.414669    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:18.414678    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:18.414685    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:18.414693    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:18.414711    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:18.414726    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:18.414734    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:18.414744    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:18.414754    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:18.414763    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:18.414770    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:18.414777    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:18.414791    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:18.414799    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:18.414810    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:20.416163    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 26
	I0816 06:13:20.416177    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:20.416283    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:20.417089    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:20.417142    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:20.417159    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:20.417189    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:20.417209    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:20.417230    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:20.417240    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:20.417249    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:20.417257    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:20.417273    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:20.417287    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:20.417295    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:20.417311    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:20.417318    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:20.417324    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:20.417331    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:20.417347    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:20.417361    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:20.417370    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:20.417378    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:22.417423    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 27
	I0816 06:13:22.417459    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:22.417520    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:22.418561    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:22.418604    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:22.418613    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:22.418630    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:22.418645    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:22.418657    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:22.418667    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:22.418675    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:22.418685    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:22.418692    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:22.418708    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:22.418724    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:22.418736    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:22.418744    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:22.418752    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:22.418759    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:22.418773    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:22.418788    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:22.418796    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:22.418805    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:24.419949    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 28
	I0816 06:13:24.419961    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:24.420043    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:24.420873    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:24.420916    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:24.420927    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:24.420937    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:24.420944    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:24.420951    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:24.420958    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:24.420964    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:24.420971    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:24.420988    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:24.420998    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:24.421005    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:24.421013    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:24.421023    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:24.421030    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:24.421041    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:24.421052    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:24.421060    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:24.421066    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:24.421074    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:26.421144    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Attempt 29
	I0816 06:13:26.421155    5252 main.go:141] libmachine: (offline-docker-266000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:26.421234    5252 main.go:141] libmachine: (offline-docker-266000) DBG | hyperkit pid from json: 5453
	I0816 06:13:26.422092    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Searching for ea:2c:a4:97:80:a3 in /var/db/dhcpd_leases ...
	I0816 06:13:26.422134    5252 main.go:141] libmachine: (offline-docker-266000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:26.422145    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:26.422154    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:26.422160    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:26.422169    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:26.422177    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:26.422185    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:26.422192    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:26.422200    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:26.422215    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:26.422223    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:26.422239    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:26.422271    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:26.422279    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:26.422288    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:26.422298    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:26.422306    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:26.422312    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:26.422329    5252 main.go:141] libmachine: (offline-docker-266000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:28.424398    5252 client.go:171] duration metric: took 1m0.854383344s to LocalClient.Create
	I0816 06:13:30.424931    5252 start.go:128] duration metric: took 1m2.887135575s to createHost
	I0816 06:13:30.424943    5252 start.go:83] releasing machines lock for "offline-docker-266000", held for 1m2.887239598s
	W0816 06:13:30.425049    5252 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-266000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:2c:a4:97:80:a3
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-266000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:2c:a4:97:80:a3
	I0816 06:13:30.509297    5252 out.go:201] 
	W0816 06:13:30.530284    5252 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:2c:a4:97:80:a3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ea:2c:a4:97:80:a3
	W0816 06:13:30.530295    5252 out.go:270] * 
	* 
	W0816 06:13:30.530998    5252 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:13:30.614096    5252 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-266000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-16 06:13:30.725237 -0700 PDT m=+3237.414836811
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-266000 -n offline-docker-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-266000 -n offline-docker-266000: exit status 7 (80.914831ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 06:13:30.804255    5463 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:13:30.804278    5463 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-266000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-266000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-266000: (5.261842441s)
--- FAIL: TestOffline (195.33s)
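The repeated "Attempt N" blocks above show the failure mechanism: the hyperkit driver polls the macOS bootpd lease file `/var/db/dhcpd_leases` for an entry whose hardware address matches the new VM's MAC (`ea:2c:a4:97:80:a3`), and gives up after the retry budget is exhausted because no such entry ever appears. A minimal sketch of that lookup is below; the field names (`ip_address=`, `hw_address=1,<mac>`) mirror the dhcp entries logged above, but the exact file layout and the `findIPForMAC` helper are illustrative assumptions, not minikube's actual parser.

```go
package main

import (
	"fmt"
	"strings"
)

// findIPForMAC scans the contents of a bootpd lease file
// (/var/db/dhcpd_leases on macOS) for an entry whose hw_address
// matches mac and returns that entry's ip_address. The assumed
// layout is one "key=value" field per line, with hw_address stored
// as "<type>,<mac>" (e.g. "1,fe:66:e2:df:45:85").
func findIPForMAC(leases, mac string) (string, bool) {
	var ip string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// ip_address precedes hw_address within an entry,
			// so remember it until the MAC is checked.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(parts) == 2 && parts[1] == mac {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	sample := `{
	name=minikube
	ip_address=192.169.0.2
	hw_address=1,fe:66:e2:df:45:85
	lease=0x66c0957b
}`
	// The failing MAC from the log never shows up in the file,
	// which is exactly the condition the driver retries on.
	_, ok := findIPForMAC(sample, "ea:2c:a4:97:80:a3")
	fmt.Println("failing MAC found:", ok)

	ip, ok := findIPForMAC(sample, "fe:66:e2:df:45:85")
	fmt.Println("known MAC:", ip, ok)
}
```

In the failing run the VM process (hyperkit pid 5453) is alive but its MAC never obtains a lease, so the search above would return false on every attempt until the driver reports "IP address never found in dhcp leases file".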

                                                
                                    
x
+
TestCertOptions (251.84s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-610000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0816 06:20:04.461346    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:20:32.180400    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-610000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.160128882s)

                                                
                                                
-- stdout --
	* [cert-options-610000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-610000" primary control-plane node in "cert-options-610000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-610000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:46:88:4c:b0:57
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-610000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:ff:d8:52:f7:b0
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:ff:d8:52:f7:b0
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-610000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-610000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-610000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (162.508577ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-610000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-610000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-610000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-610000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-610000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (162.187501ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-610000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-610000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-610000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-16 06:22:57.819078 -0700 PDT m=+3804.523842558
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-610000 -n cert-options-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-610000 -n cert-options-610000: exit status 7 (78.503295ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 06:22:57.895919    5650 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:22:57.895943    5650 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-610000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-610000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-610000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-610000: (5.233599389s)
--- FAIL: TestCertOptions (251.84s)

TestCertExpiration (1752.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0816 06:17:48.341233    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:17:52.899559    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:18:11.037116    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:18:27.947347    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.358011423s)

-- stdout --
	* [cert-expiration-624000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-624000" primary control-plane node in "cert-expiration-624000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-624000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a6:60:c4:a6:2b:76
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-624000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:41:b3:c4:6f:63
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:41:b3:c4:6f:63
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0816 06:22:52.889512    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0816 06:25:04.453123    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (22m0.80541821s)

-- stdout --
	* [cert-expiration-624000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-624000" primary control-plane node in "cert-expiration-624000" cluster
	* Updating the running hyperkit "cert-expiration-624000" VM ...
	* Updating the running hyperkit "cert-expiration-624000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-624000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-624000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-624000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-624000" primary control-plane node in "cert-expiration-624000" cluster
	* Updating the running hyperkit "cert-expiration-624000" VM ...
	* Updating the running hyperkit "cert-expiration-624000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-624000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-16 06:46:55.400315 -0700 PDT m=+5241.954276171
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-624000 -n cert-expiration-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-624000 -n cert-expiration-624000: exit status 7 (78.146016ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 06:46:55.476614    6995 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:46:55.476635    6995 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-624000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-624000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-624000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-624000: (5.23795126s)
--- FAIL: TestCertExpiration (1752.48s)

TestDockerFlags (252.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-985000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0816 06:15:04.468850    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.475982    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.488127    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.511518    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.554302    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.637676    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:04.801028    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:05.124298    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:05.766829    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:07.048999    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:09.610716    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:14.734048    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:24.975638    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:15:45.458282    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:16:26.420638    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-985000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.319322245s)

-- stdout --
	* [docker-flags-985000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-985000" primary control-plane node in "docker-flags-985000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-985000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0816 06:14:39.283979    5502 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:14:39.284249    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:14:39.284254    5502 out.go:358] Setting ErrFile to fd 2...
	I0816 06:14:39.284258    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:14:39.284434    5502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:14:39.285964    5502 out.go:352] Setting JSON to false
	I0816 06:14:39.308589    5502 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3857,"bootTime":1723810222,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:14:39.308701    5502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:14:39.331882    5502 out.go:177] * [docker-flags-985000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:14:39.374069    5502 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:14:39.374093    5502 notify.go:220] Checking for updates...
	I0816 06:14:39.416019    5502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:14:39.437020    5502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:14:39.457850    5502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:14:39.479079    5502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:14:39.500059    5502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:14:39.521207    5502 config.go:182] Loaded profile config "force-systemd-flag-222000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:14:39.521302    5502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:14:39.549975    5502 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 06:14:39.591746    5502 start.go:297] selected driver: hyperkit
	I0816 06:14:39.591763    5502 start.go:901] validating driver "hyperkit" against <nil>
	I0816 06:14:39.591773    5502 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:14:39.594905    5502 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:14:39.595018    5502 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:14:39.603556    5502 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:14:39.607507    5502 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:14:39.607530    5502 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:14:39.607563    5502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 06:14:39.607756    5502 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0816 06:14:39.607819    5502 cni.go:84] Creating CNI manager for ""
	I0816 06:14:39.607839    5502 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 06:14:39.607850    5502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 06:14:39.607928    5502 start.go:340] cluster config:
	{Name:docker-flags-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:14:39.608022    5502 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:14:39.628968    5502 out.go:177] * Starting "docker-flags-985000" primary control-plane node in "docker-flags-985000" cluster
	I0816 06:14:39.670806    5502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:14:39.670845    5502 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:14:39.670874    5502 cache.go:56] Caching tarball of preloaded images
	I0816 06:14:39.670987    5502 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:14:39.670997    5502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:14:39.671077    5502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/docker-flags-985000/config.json ...
	I0816 06:14:39.671096    5502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/docker-flags-985000/config.json: {Name:mkcd1da6fb2f2a5c60ec23bb7d808fe5156c8927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:14:39.671404    5502 start.go:360] acquireMachinesLock for docker-flags-985000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:15:36.498914    5502 start.go:364] duration metric: took 56.829011635s to acquireMachinesLock for "docker-flags-985000"
	I0816 06:15:36.498952    5502 start.go:93] Provisioning new machine with config: &{Name:docker-flags-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSH
Key: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:15:36.499012    5502 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:15:36.520506    5502 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:15:36.520646    5502 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:15:36.520689    5502 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:15:36.529308    5502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53787
	I0816 06:15:36.529647    5502 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:15:36.530181    5502 main.go:141] libmachine: Using API Version  1
	I0816 06:15:36.530192    5502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:15:36.530466    5502 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:15:36.530607    5502 main.go:141] libmachine: (docker-flags-985000) Calling .GetMachineName
	I0816 06:15:36.530715    5502 main.go:141] libmachine: (docker-flags-985000) Calling .DriverName
	I0816 06:15:36.530852    5502 start.go:159] libmachine.API.Create for "docker-flags-985000" (driver="hyperkit")
	I0816 06:15:36.530875    5502 client.go:168] LocalClient.Create starting
	I0816 06:15:36.530915    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:15:36.530973    5502 main.go:141] libmachine: Decoding PEM data...
	I0816 06:15:36.530991    5502 main.go:141] libmachine: Parsing certificate...
	I0816 06:15:36.531051    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:15:36.531088    5502 main.go:141] libmachine: Decoding PEM data...
	I0816 06:15:36.531102    5502 main.go:141] libmachine: Parsing certificate...
	I0816 06:15:36.531113    5502 main.go:141] libmachine: Running pre-create checks...
	I0816 06:15:36.531124    5502 main.go:141] libmachine: (docker-flags-985000) Calling .PreCreateCheck
	I0816 06:15:36.531195    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.531381    5502 main.go:141] libmachine: (docker-flags-985000) Calling .GetConfigRaw
	I0816 06:15:36.562518    5502 main.go:141] libmachine: Creating machine...
	I0816 06:15:36.562531    5502 main.go:141] libmachine: (docker-flags-985000) Calling .Create
	I0816 06:15:36.562615    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.562764    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:15:36.562606    5523 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:15:36.562793    5502 main.go:141] libmachine: (docker-flags-985000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:15:36.768635    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:15:36.768530    5523 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/id_rsa...
	I0816 06:15:36.877240    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:15:36.877169    5523 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk...
	I0816 06:15:36.877251    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Writing magic tar header
	I0816 06:15:36.877260    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Writing SSH key tar header
	I0816 06:15:36.877849    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:15:36.877813    5523 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000 ...
	I0816 06:15:37.253646    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:37.253670    5502 main.go:141] libmachine: (docker-flags-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid
	I0816 06:15:37.253680    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Using UUID 29b6a919-fc98-462a-b797-8890076f11a0
	I0816 06:15:37.279172    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Generated MAC 6a:ad:46:c:4e:fb
	I0816 06:15:37.279197    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000
	I0816 06:15:37.279256    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"29b6a919-fc98-462a-b797-8890076f11a0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:15:37.279297    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"29b6a919-fc98-462a-b797-8890076f11a0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:15:37.279351    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "29b6a919-fc98-462a-b797-8890076f11a0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000"}
	I0816 06:15:37.279397    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 29b6a919-fc98-462a-b797-8890076f11a0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000"
	I0816 06:15:37.279428    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:15:37.282267    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 DEBUG: hyperkit: Pid is 5524
	I0816 06:15:37.282690    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 0
	I0816 06:15:37.282703    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:37.282810    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:37.283787    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:37.283887    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:37.283913    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:37.283930    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:37.283979    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:37.283998    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:37.284017    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:37.284044    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:37.284068    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:37.284087    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:37.284101    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:37.284117    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:37.284133    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:37.284146    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:37.284160    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:37.284173    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:37.284187    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:37.284202    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:37.284221    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:37.284236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:37.290068    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:15:37.298120    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:15:37.298949    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:15:37.298995    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:15:37.299023    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:15:37.299035    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:15:37.675745    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:15:37.675760    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:15:37.790288    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:15:37.790304    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:15:37.790316    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:15:37.790330    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:15:37.791278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:15:37.791292    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:15:39.284401    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 1
	I0816 06:15:39.284415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:39.284527    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:39.285341    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:39.285372    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:39.285384    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:39.285407    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:39.285430    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:39.285442    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:39.285450    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:39.285460    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:39.285466    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:39.285486    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:39.285496    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:39.285504    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:39.285512    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:39.285530    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:39.285542    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:39.285553    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:39.285564    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:39.285579    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:39.285592    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:39.285608    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:41.287383    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 2
	I0816 06:15:41.287400    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:41.287468    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:41.288333    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:41.288372    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:41.288383    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:41.288390    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:41.288401    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:41.288409    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:41.288415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:41.288435    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:41.288442    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:41.288449    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:41.288455    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:41.288461    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:41.288470    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:41.288479    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:41.288486    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:41.288503    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:41.288515    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:41.288523    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:41.288529    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:41.288538    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:43.161771    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:15:43.161985    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:15:43.161998    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:15:43.185391    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:15:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:15:43.289171    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 3
	I0816 06:15:43.289199    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:43.289383    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:43.290813    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:43.290936    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:43.290956    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:43.290974    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:43.290988    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:43.291002    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:43.291014    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:43.291027    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:43.291043    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:43.291069    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:43.291101    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:43.291131    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:43.291142    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:43.291162    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:43.291178    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:43.291190    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:43.291201    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:43.291211    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:43.291223    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:43.291235    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:45.292331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 4
	I0816 06:15:45.292347    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:45.292425    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:45.293254    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:45.293324    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:45.293338    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:45.293346    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:45.293358    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:45.293365    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:45.293371    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:45.293388    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:45.293397    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:45.293404    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:45.293412    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:45.293426    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:45.293439    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:45.293447    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:45.293458    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:45.293466    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:45.293473    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:45.293481    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:45.293489    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:45.293502    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:47.295429    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 5
	I0816 06:15:47.295441    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:47.295479    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:47.296648    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:47.296699    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:47.296711    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:47.296720    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:47.296726    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:47.296735    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:47.296745    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:47.296760    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:47.296767    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:47.296776    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:47.296795    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:47.296808    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:47.296825    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:47.296834    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:47.296841    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:47.296849    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:47.296864    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:47.296877    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:47.296893    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:47.296904    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:49.298676    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 6
	I0816 06:15:49.298688    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:49.298797    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:49.299819    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:49.299867    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:51.300702    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 7
	I0816 06:15:51.300716    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:51.300769    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:51.301580    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:51.301640    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:53.303736    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 8
	I0816 06:15:53.303749    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:53.303814    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:53.304569    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:53.304621    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:55.306769    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 9
	I0816 06:15:55.306784    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:55.306850    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:55.307687    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:55.307737    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:57.309853    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 10
	I0816 06:15:57.309867    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:57.309945    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:57.310839    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:57.310886    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:59.312978    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 11
	I0816 06:15:59.312989    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:59.313047    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:15:59.313826    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:15:59.313883    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:01.314679    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 12
	I0816 06:16:01.314692    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:01.314823    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:01.315640    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:01.315699    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:03.317879    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 13
	I0816 06:16:03.317893    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:03.318007    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:03.318829    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:03.318878    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:03.318888    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:03.318897    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:03.318905    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:03.318922    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:03.318930    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:03.318937    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:03.318946    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:03.318955    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:03.318962    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:03.318979    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:03.318991    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:03.319001    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:03.319008    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:03.319015    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:03.319023    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:03.319030    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:03.319039    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:03.319048    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:05.321001    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 14
	I0816 06:16:05.321013    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:05.321070    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:05.321916    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:05.321970    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:05.321982    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:05.321994    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:05.322006    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:05.322013    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:05.322019    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:05.322042    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:05.322065    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:05.322075    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:05.322083    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:05.322093    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:05.322101    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:05.322109    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:05.322116    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:05.322136    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:05.322146    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:05.322156    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:05.322164    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:05.322172    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:07.324050    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 15
	I0816 06:16:07.324065    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:07.324118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:07.324930    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:07.324993    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:07.325003    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:07.325011    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:07.325018    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:07.325024    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:07.325032    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:07.325040    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:07.325048    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:07.325055    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:07.325061    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:07.325072    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:07.325078    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:07.325084    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:07.325091    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:07.325099    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:07.325106    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:07.325113    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:07.325120    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:07.325128    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:09.325750    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 16
	I0816 06:16:09.325761    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:09.325821    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:09.326600    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:09.326666    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:09.326678    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:09.326685    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:09.326693    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:09.326701    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:09.326708    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:09.326717    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:09.326725    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:09.326731    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:09.326739    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:09.326746    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:09.326752    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:09.326766    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:09.326781    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:09.326798    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:09.326810    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:09.326818    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:09.326826    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:09.326843    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:11.327891    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 17
	I0816 06:16:11.327912    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:11.327975    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:11.328752    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:11.328808    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:11.328818    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:11.328827    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:11.328833    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:11.328839    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:11.328845    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:11.328852    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:11.328860    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:11.328879    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:11.328891    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:11.328899    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:11.328907    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:11.328920    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:11.328931    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:11.328943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:11.328951    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:11.328959    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:11.328967    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:11.328981    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:13.329248    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 18
	I0816 06:16:13.329260    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:13.329319    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:13.330088    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:13.330118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:13.330136    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:13.330153    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:13.330165    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:13.330173    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:13.330184    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:13.330201    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:13.330212    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:13.330221    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:13.330236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:13.330250    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:13.330262    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:13.330279    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:13.330303    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:13.330312    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:13.330320    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:13.330331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:13.330339    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:13.330347    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:15.330666    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 19
	I0816 06:16:15.330677    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:15.330759    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:15.331758    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:15.331818    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:15.331831    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:15.331860    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:15.331869    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:15.331877    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:15.331898    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:15.331913    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:15.331924    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:15.331932    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:15.331939    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:15.331946    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:15.331968    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:15.331979    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:15.331987    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:15.331995    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:15.332003    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:15.332011    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:15.332018    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:15.332030    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:17.333788    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 20
	I0816 06:16:17.333800    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:17.333868    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:17.334697    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:17.334754    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:17.334765    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:17.334775    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:17.334784    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:17.334795    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:17.334802    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:17.334809    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:17.334815    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:17.334835    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:17.334846    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:17.334861    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:17.334870    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:17.334876    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:17.334890    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:17.334908    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:17.334922    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:17.334938    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:17.334948    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:17.334965    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:19.335731    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 21
	I0816 06:16:19.335742    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:19.335781    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:19.336614    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:19.336678    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:19.336693    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:19.336703    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:19.336711    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:19.336718    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:19.336725    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:19.336741    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:19.336754    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:19.336762    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:19.336770    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:19.336777    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:19.336785    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:19.336791    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:19.336799    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:19.336806    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:19.336812    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:19.336824    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:19.336837    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:19.336847    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:21.338778    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 22
	I0816 06:16:21.338788    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:21.338862    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:21.339697    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:21.339751    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:21.339764    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:21.339774    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:21.339781    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:21.339786    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:21.339793    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:21.339799    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:21.339812    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:21.339824    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:21.339831    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:21.339838    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:21.339845    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:21.339854    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:21.339863    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:21.339872    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:21.339888    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:21.339897    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:21.339913    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:21.339924    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:23.341877    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 23
	I0816 06:16:23.341889    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:23.341968    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:23.342774    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:23.342828    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:23.342846    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:23.342857    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:23.342863    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:23.342881    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:23.342891    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:23.342899    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:23.342907    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:23.342924    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:23.342937    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:23.342946    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:23.342954    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:23.342961    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:23.342967    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:23.342973    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:23.342980    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:23.342993    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:23.343002    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:23.343010    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:25.343229    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 24
	I0816 06:16:25.343247    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:25.343336    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:25.344162    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:25.344212    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:25.344220    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:25.344229    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:25.344235    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:25.344256    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:25.344262    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:25.344270    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:25.344278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:25.344284    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:25.344293    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:25.344299    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:25.344307    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:25.344321    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:25.344333    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:25.344347    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:25.344356    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:25.344365    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:25.344374    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:25.344383    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:27.346315    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 25
	I0816 06:16:27.346329    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:27.346373    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:27.347193    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:27.347236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:27.347246    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:27.347255    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:27.347262    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:27.347269    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:27.347275    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:27.347286    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:27.347298    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:27.347304    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:27.347319    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:27.347332    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:27.347355    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:27.347368    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:27.347376    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:27.347384    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:27.347391    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:27.347399    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:27.347406    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:27.347413    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:29.348695    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 26
	I0816 06:16:29.348708    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:29.348781    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:29.349598    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:29.349637    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:29.349650    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:29.349658    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:29.349667    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:29.349692    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:29.349706    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:29.349714    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:29.349723    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:29.349731    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:29.349739    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:29.349752    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:29.349763    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:29.349778    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:29.349788    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:29.349796    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:29.349805    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:29.349816    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:29.349825    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:29.349847    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:31.350987    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 27
	I0816 06:16:31.351003    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:31.351064    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:31.351890    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:31.351947    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:31.351958    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:31.351971    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:31.351978    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:31.351988    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:31.351997    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:31.352005    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:31.352011    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:31.352035    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:31.352047    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:31.352057    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:31.352065    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:31.352073    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:31.352081    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:31.352089    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:31.352103    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:31.352110    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:31.352117    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:31.352126    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:33.353077    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 28
	I0816 06:16:33.353090    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:33.353130    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:33.353976    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:33.354023    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:33.354036    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:33.354064    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:33.354071    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:33.354080    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:33.354088    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:33.354094    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:33.354110    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:33.354119    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:33.354126    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:33.354134    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:33.354147    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:33.354159    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:33.354173    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:33.354182    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:33.354192    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:33.354201    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:33.354218    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:33.354226    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:35.355184    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 29
	I0816 06:16:35.355197    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:35.355270    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:35.356161    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 6a:ad:46:c:4e:fb in /var/db/dhcpd_leases ...
	I0816 06:16:35.356223    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:35.356236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:35.356246    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:35.356252    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:35.356259    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:35.356266    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:35.356274    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:35.356285    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:35.356301    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:35.356310    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:35.356317    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:35.356323    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:35.356337    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:35.356349    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:35.356359    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:35.356366    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:35.356378    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:35.356391    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:35.356401    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:37.356464    5502 client.go:171] duration metric: took 1m0.827203015s to LocalClient.Create
	I0816 06:16:39.358507    5502 start.go:128] duration metric: took 1m2.861161475s to createHost
	I0816 06:16:39.358528    5502 start.go:83] releasing machines lock for "docker-flags-985000", held for 1m2.861279015s
	W0816 06:16:39.358542    5502 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:ad:46:c:4e:fb
	I0816 06:16:39.358874    5502 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:16:39.358905    5502 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:16:39.367456    5502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53789
	I0816 06:16:39.367806    5502 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:16:39.368185    5502 main.go:141] libmachine: Using API Version  1
	I0816 06:16:39.368206    5502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:16:39.368414    5502 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:16:39.368790    5502 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:16:39.368818    5502 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:16:39.377306    5502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53791
	I0816 06:16:39.377644    5502 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:16:39.378018    5502 main.go:141] libmachine: Using API Version  1
	I0816 06:16:39.378037    5502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:16:39.378268    5502 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:16:39.378388    5502 main.go:141] libmachine: (docker-flags-985000) Calling .GetState
	I0816 06:16:39.378481    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.378555    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:39.379499    5502 main.go:141] libmachine: (docker-flags-985000) Calling .DriverName
	I0816 06:16:39.402818    5502 out.go:177] * Deleting "docker-flags-985000" in hyperkit ...
	I0816 06:16:39.444820    5502 main.go:141] libmachine: (docker-flags-985000) Calling .Remove
	I0816 06:16:39.444956    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.444965    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.445034    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:39.445987    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.446055    5502 main.go:141] libmachine: (docker-flags-985000) DBG | waiting for graceful shutdown
	I0816 06:16:40.448183    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:40.448290    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:40.449217    5502 main.go:141] libmachine: (docker-flags-985000) DBG | waiting for graceful shutdown
	I0816 06:16:41.449538    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:41.449602    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:41.451425    5502 main.go:141] libmachine: (docker-flags-985000) DBG | waiting for graceful shutdown
	I0816 06:16:42.452596    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:42.452671    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:42.453239    5502 main.go:141] libmachine: (docker-flags-985000) DBG | waiting for graceful shutdown
	I0816 06:16:43.453943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:43.454040    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:43.454716    5502 main.go:141] libmachine: (docker-flags-985000) DBG | waiting for graceful shutdown
	I0816 06:16:44.456849    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:44.456910    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5524
	I0816 06:16:44.457862    5502 main.go:141] libmachine: (docker-flags-985000) DBG | sending sigkill
	I0816 06:16:44.457872    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:44.467561    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:16:44 WARN : hyperkit: failed to read stdout: EOF
	I0816 06:16:44.467577    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:16:44 WARN : hyperkit: failed to read stderr: EOF
	W0816 06:16:44.484236    5502 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:ad:46:c:4e:fb
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:ad:46:c:4e:fb
	I0816 06:16:44.484255    5502 start.go:729] Will try again in 5 seconds ...
	I0816 06:16:49.485553    5502 start.go:360] acquireMachinesLock for docker-flags-985000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:17:42.248085    5502 start.go:364] duration metric: took 52.763923234s to acquireMachinesLock for "docker-flags-985000"
	I0816 06:17:42.248132    5502 start.go:93] Provisioning new machine with config: &{Name:docker-flags-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:17:42.248189    5502 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:17:42.269583    5502 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:17:42.269645    5502 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:17:42.269668    5502 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:17:42.278639    5502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53795
	I0816 06:17:42.279128    5502 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:17:42.279636    5502 main.go:141] libmachine: Using API Version  1
	I0816 06:17:42.279682    5502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:17:42.280045    5502 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:17:42.280277    5502 main.go:141] libmachine: (docker-flags-985000) Calling .GetMachineName
	I0816 06:17:42.280372    5502 main.go:141] libmachine: (docker-flags-985000) Calling .DriverName
	I0816 06:17:42.280486    5502 start.go:159] libmachine.API.Create for "docker-flags-985000" (driver="hyperkit")
	I0816 06:17:42.280498    5502 client.go:168] LocalClient.Create starting
	I0816 06:17:42.280527    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:17:42.280580    5502 main.go:141] libmachine: Decoding PEM data...
	I0816 06:17:42.280590    5502 main.go:141] libmachine: Parsing certificate...
	I0816 06:17:42.280630    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:17:42.280668    5502 main.go:141] libmachine: Decoding PEM data...
	I0816 06:17:42.280679    5502 main.go:141] libmachine: Parsing certificate...
	I0816 06:17:42.280690    5502 main.go:141] libmachine: Running pre-create checks...
	I0816 06:17:42.280696    5502 main.go:141] libmachine: (docker-flags-985000) Calling .PreCreateCheck
	I0816 06:17:42.280776    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:42.280799    5502 main.go:141] libmachine: (docker-flags-985000) Calling .GetConfigRaw
	I0816 06:17:42.311392    5502 main.go:141] libmachine: Creating machine...
	I0816 06:17:42.311401    5502 main.go:141] libmachine: (docker-flags-985000) Calling .Create
	I0816 06:17:42.311483    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:42.311618    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:17:42.311478    5541 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:17:42.311667    5502 main.go:141] libmachine: (docker-flags-985000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:17:42.797183    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:17:42.797110    5541 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/id_rsa...
	I0816 06:17:42.877639    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:17:42.877555    5541 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk...
	I0816 06:17:42.877651    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Writing magic tar header
	I0816 06:17:42.877661    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Writing SSH key tar header
	I0816 06:17:42.878046    5502 main.go:141] libmachine: (docker-flags-985000) DBG | I0816 06:17:42.878013    5541 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000 ...
	I0816 06:17:43.256453    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:43.256472    5502 main.go:141] libmachine: (docker-flags-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid
	I0816 06:17:43.256504    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Using UUID 6bc17717-b38b-4ad5-a420-e828b4a8aeb4
	I0816 06:17:43.282366    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Generated MAC 7a:84:19:22:87:41
	I0816 06:17:43.282384    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000
	I0816 06:17:43.282429    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc17717-b38b-4ad5-a420-e828b4a8aeb4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:17:43.282457    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6bc17717-b38b-4ad5-a420-e828b4a8aeb4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:17:43.282522    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6bc17717-b38b-4ad5-a420-e828b4a8aeb4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000"}
	I0816 06:17:43.282559    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6bc17717-b38b-4ad5-a420-e828b4a8aeb4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/docker-flags-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-985000"
	I0816 06:17:43.282581    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:17:43.285415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 DEBUG: hyperkit: Pid is 5555
	I0816 06:17:43.285842    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 0
	I0816 06:17:43.285876    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:43.285953    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:43.286900    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:43.286965    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:43.286999    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:43.287012    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:43.287033    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:43.287051    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:43.287073    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:43.287098    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:43.287118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:43.287134    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:43.287149    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:43.287172    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:43.287190    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:43.287210    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:43.287228    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:43.287238    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:43.287248    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:43.287277    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:43.287292    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:43.287310    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:43.293358    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:17:43.301260    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/docker-flags-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:17:43.302180    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:17:43.302211    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:17:43.302223    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:17:43.302236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:17:43.679281    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:17:43.679297    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:17:43.794140    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:17:43.794162    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:17:43.794195    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:17:43.794213    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:17:43.795042    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:17:43.795054    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:17:45.287101    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 1
	I0816 06:17:45.287118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:45.287166    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:45.287997    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:45.288049    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:45.288060    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:45.288070    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:45.288076    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:45.288093    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:45.288102    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:45.288109    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:45.288118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:45.288127    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:45.288135    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:45.288147    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:45.288155    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:45.288164    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:45.288172    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:45.288179    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:45.288186    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:45.288193    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:45.288201    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:45.288220    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:47.289134    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 2
	I0816 06:17:47.289150    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:47.289226    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:47.290003    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:47.290069    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:47.290080    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:47.290119    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:47.290137    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:47.290148    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:47.290164    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:47.290178    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:47.290187    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:47.290195    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:47.290202    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:47.290210    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:47.290218    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:47.290240    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:47.290254    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:47.290264    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:47.290275    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:47.290284    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:47.290293    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:47.290303    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:49.223361    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:49 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:17:49.223591    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:17:49.223601    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:17:49.244117    5502 main.go:141] libmachine: (docker-flags-985000) DBG | 2024/08/16 06:17:49 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:17:49.291180    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 3
	I0816 06:17:49.291205    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:49.291331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:49.292809    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:49.292923    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:49.292943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:49.292964    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:49.292987    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:49.293020    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:49.293047    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:49.293063    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:49.293078    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:49.293093    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:49.293104    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:49.293155    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:49.293173    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:49.293223    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:49.293248    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:49.293260    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:49.293270    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:49.293294    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:49.293310    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:49.293323    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:51.293318    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 4
	I0816 06:17:51.293334    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:51.293403    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:51.294261    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:51.294306    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:51.294315    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:51.294323    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:51.294331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:51.294347    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:51.294358    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:51.294365    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:51.294375    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:51.294383    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:51.294391    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:51.294398    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:51.294406    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:51.294413    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:51.294434    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:51.294447    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:51.294460    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:51.294474    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:51.294487    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:51.294500    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:53.295731    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 5
	I0816 06:17:53.295744    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:53.295819    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:53.296684    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:53.296734    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:53.296743    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:53.296761    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:53.296773    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:53.296783    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:53.296800    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:53.296808    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:53.296817    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:53.296826    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:53.296832    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:53.296841    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:53.296849    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:53.296865    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:53.296874    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:53.296888    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:53.296907    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:53.296916    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:53.296924    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:53.296939    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:55.298091    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 6
	I0816 06:17:55.298103    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:55.298176    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:55.299015    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:55.299063    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:55.299076    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:55.299086    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:55.299092    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:55.299107    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:55.299116    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:55.299123    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:55.299133    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:55.299147    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:55.299158    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:55.299166    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:55.299174    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:55.299181    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:55.299189    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:55.299205    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:55.299216    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:55.299233    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:55.299249    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:55.299259    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:57.299249    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 7
	I0816 06:17:57.299261    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:57.299328    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:57.300120    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:57.300163    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:57.300172    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:57.300183    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:57.300190    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:57.300199    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:57.300205    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:57.300235    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:57.300247    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:57.300259    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:57.300267    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:57.300278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:57.300286    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:57.300293    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:57.300301    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:57.300308    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:57.300314    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:57.300320    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:57.300329    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:57.300342    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:59.301183    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 8
	I0816 06:17:59.301199    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:59.301278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:17:59.302118    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:17:59.302193    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:59.302204    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:59.302214    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:59.302236    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:59.302251    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:59.302262    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:59.302271    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:59.302287    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:59.302302    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:59.302313    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:59.302335    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:59.302365    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:59.302371    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:59.302378    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:59.302384    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:59.302392    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:59.302399    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:59.302406    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:59.302415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:01.303416    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 9
	I0816 06:18:01.303442    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:01.303487    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:01.304348    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:01.304401    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:01.304426    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:01.304441    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:01.304453    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:01.304466    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:01.304482    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:01.304495    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:01.304508    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:01.304516    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:01.304540    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:01.304555    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:01.304567    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:01.304574    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:01.304582    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:01.304589    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:01.304597    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:01.304604    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:01.304612    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:01.304626    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:03.305513    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 10
	I0816 06:18:03.305526    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:03.305586    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:03.306372    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:03.306415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:03.306424    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:03.306435    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:03.306448    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:03.306465    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:03.306483    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:03.306492    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:03.306498    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:03.306504    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:03.306510    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:03.306530    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:03.306542    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:03.306551    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:03.306557    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:03.306569    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:03.306583    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:03.306591    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:03.306599    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:03.306608    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:05.307488    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 11
	I0816 06:18:05.307500    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:05.307619    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:05.308397    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:05.308442    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:05.308459    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:05.308474    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:05.308482    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:05.308490    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:05.308498    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:05.308512    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:05.308529    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:05.308537    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:05.308546    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:05.308555    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:05.308563    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:05.308572    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:05.308580    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:05.308592    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:05.308604    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:05.308621    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:05.308631    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:05.308640    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:07.310194    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 12
	I0816 06:18:07.310221    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:07.310290    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:07.311132    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:07.311177    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:07.311188    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:07.311204    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:07.311211    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:07.311218    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:07.311226    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:07.311234    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:07.311243    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:07.311251    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:07.311261    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:07.311267    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:07.311274    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:07.311282    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:07.311289    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:07.311297    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:07.311308    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:07.311316    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:07.311323    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:07.311331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:09.311916    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 13
	I0816 06:18:09.311951    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:09.312008    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:09.312815    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:09.312838    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:09.312856    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:09.312866    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:09.312876    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:09.312886    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:09.312892    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:09.312898    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:09.312906    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:09.312921    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:09.312933    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:09.312941    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:09.312948    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:09.312956    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:09.312962    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:09.312968    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:09.312987    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:09.312998    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:09.313009    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:09.313017    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:11.314978    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 14
	I0816 06:18:11.314989    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:11.315095    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:11.315868    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:11.315914    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:11.315926    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:11.315935    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:11.315941    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:11.315952    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:11.315961    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:11.315988    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:11.316001    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:11.316009    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:11.316017    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:11.316026    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:11.316040    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:11.316048    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:11.316056    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:11.316064    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:11.316070    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:11.316075    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:11.316082    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:11.316090    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:13.316515    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 15
	I0816 06:18:13.316532    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:13.316639    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:13.317417    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:13.317454    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:13.317462    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:13.317473    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:13.317482    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:13.317489    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:13.317500    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:13.317507    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:13.317515    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:13.317522    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:13.317530    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:13.317547    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:13.317560    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:13.317573    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:13.317582    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:13.317589    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:13.317596    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:13.317612    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:13.317624    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:13.317634    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:15.319108    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 16
	I0816 06:18:15.319120    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:15.319161    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:15.319962    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:15.320021    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:15.320034    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:15.320043    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:15.320055    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:15.320064    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:15.320070    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:15.320077    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:15.320083    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:15.320096    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:15.320105    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:15.320111    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:15.320119    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:15.320126    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:15.320140    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:15.320148    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:15.320155    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:15.320164    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:15.320176    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:15.320188    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:17.322116    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 17
	I0816 06:18:17.322129    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:17.322211    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:17.323072    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:17.323116    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:17.323125    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:17.323145    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:17.323163    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:17.323173    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:17.323182    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:17.323190    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:17.323202    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:17.323210    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:17.323215    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:17.323222    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:17.323232    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:17.323241    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:17.323259    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:17.323273    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:17.323281    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:17.323290    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:17.323297    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:17.323305    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:19.324726    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 18
	I0816 06:18:19.324750    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:19.324846    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:19.325664    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:19.325707    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:19.325715    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:19.325727    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:19.325734    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:19.325741    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:19.325748    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:19.325753    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:19.325760    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:19.325768    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:19.325792    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:19.325804    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:19.325812    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:19.325822    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:19.325830    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:19.325838    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:19.325854    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:19.325862    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:19.325868    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:19.325876    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:21.327812    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 19
	I0816 06:18:21.327825    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:21.327892    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:21.328682    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:21.328734    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:21.328747    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:21.328757    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:21.328766    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:21.328782    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:21.328789    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:21.328797    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:21.328804    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:21.328812    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:21.328826    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:21.328841    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:21.328860    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:21.328872    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:21.328885    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:21.328910    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:21.328919    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:21.328929    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:21.328943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:21.328955    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:23.329800    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 20
	I0816 06:18:23.329813    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:23.329893    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:23.330709    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:23.330754    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:23.330767    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:23.330778    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:23.330787    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:23.330796    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:23.330804    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:23.330810    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:23.330817    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:23.330827    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:23.330835    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:23.330843    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:23.330850    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:23.330857    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:23.330870    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:23.330879    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:23.330886    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:23.330893    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:23.330900    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:23.330908    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:25.332264    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 21
	I0816 06:18:25.332278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:25.332332    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:25.333132    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:25.333158    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:25.333166    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:25.333175    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:25.333183    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:25.333191    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:25.333197    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:25.333204    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:25.333211    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:25.333225    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:25.333243    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:25.333254    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:25.333266    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:25.333278    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:25.333295    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:25.333308    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:25.333316    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:25.333323    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:25.333328    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:25.333342    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:27.335282    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 22
	I0816 06:18:27.335302    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:27.335351    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:27.336210    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:27.336264    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:27.336277    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:27.336308    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:27.336318    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:27.336324    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:27.336331    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:27.336339    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:27.336346    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:27.336355    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:27.336361    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:27.336374    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:27.336384    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:27.336392    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:27.336400    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:27.336408    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:27.336415    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:27.336422    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:27.336430    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:27.336438    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:29.336904    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 23
	I0816 06:18:29.336917    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:29.336989    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:29.337800    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:29.337850    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:29.337860    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:29.337877    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:29.337906    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:29.337923    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:29.337934    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:29.337942    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:29.337950    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:29.337958    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:29.337964    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:29.337971    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:29.337978    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:29.337986    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:29.337992    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:29.338005    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:29.338015    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:29.338025    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:29.338036    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:29.338044    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:31.339954    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 24
	I0816 06:18:31.339969    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:31.340040    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:31.340839    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:31.340890    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:31.340899    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:31.340908    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:31.340923    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:31.340930    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:31.340937    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:31.340943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:31.340951    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:31.340957    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:31.340976    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:31.340988    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:31.341003    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:31.341014    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:31.341031    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:31.341040    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:31.341047    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:31.341055    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:31.341062    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:31.341070    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:33.342104    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 25
	I0816 06:18:33.342119    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:33.342182    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:33.342980    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:33.343023    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:33.343034    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:33.343044    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:33.343061    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:33.343068    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:33.343075    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:33.343082    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:33.343089    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:33.343095    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:33.343116    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:33.343133    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:33.343142    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:33.343151    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:33.343158    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:33.343170    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:33.343187    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:33.343201    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:33.343209    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:33.343217    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:35.343352    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 26
	I0816 06:18:35.343366    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:35.343494    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:35.344479    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:35.344535    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:35.344545    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:35.344564    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:35.344571    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:35.344579    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:35.344587    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:35.344603    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:35.344616    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:35.344629    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:35.344638    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:35.344646    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:35.344654    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:35.344669    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:35.344683    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:35.344698    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:35.344707    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:35.344714    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:35.344720    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:35.344729    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:37.344771    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 27
	I0816 06:18:37.344787    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:37.344909    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:37.345695    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:37.345752    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:37.345764    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:37.345774    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:37.345781    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:37.345788    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:37.345794    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:37.345801    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:37.345808    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:37.345814    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:37.345838    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:37.345850    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:37.345858    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:37.345866    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:37.345875    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:37.345882    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:37.345892    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:37.345898    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:37.345910    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:37.345923    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:39.347150    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 28
	I0816 06:18:39.347625    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:39.347718    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:39.348117    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:39.348199    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:39.348208    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:39.348235    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:39.348245    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:39.348259    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:39.348266    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:39.348275    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:39.348286    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:39.348346    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:39.348370    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:39.348389    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:39.348411    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:39.348427    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:39.348441    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:39.348498    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:39.348513    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:39.348522    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:39.348532    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:39.348540    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:41.348907    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Attempt 29
	I0816 06:18:41.348931    5502 main.go:141] libmachine: (docker-flags-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:18:41.348943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | hyperkit pid from json: 5555
	I0816 06:18:41.349809    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Searching for 7a:84:19:22:87:41 in /var/db/dhcpd_leases ...
	I0816 06:18:41.349870    5502 main.go:141] libmachine: (docker-flags-985000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:18:41.349880    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:18:41.349911    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:18:41.349925    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:18:41.349935    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:18:41.349943    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:18:41.349953    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:18:41.349962    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:18:41.349969    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:18:41.349977    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:18:41.349990    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:18:41.350000    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:18:41.350020    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:18:41.350030    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:18:41.350038    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:18:41.350043    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:18:41.350051    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:18:41.350059    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:18:41.350091    5502 main.go:141] libmachine: (docker-flags-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:18:43.350837    5502 client.go:171] duration metric: took 1m1.071963247s to LocalClient.Create
	I0816 06:18:45.351050    5502 start.go:128] duration metric: took 1m3.104536767s to createHost
	I0816 06:18:45.351078    5502 start.go:83] releasing machines lock for "docker-flags-985000", held for 1m3.10464962s
	W0816 06:18:45.351167    5502 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-985000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:84:19:22:87:41
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-985000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:84:19:22:87:41
	I0816 06:18:45.414410    5502 out.go:201] 
	W0816 06:18:45.435625    5502 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:84:19:22:87:41
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:84:19:22:87:41
	W0816 06:18:45.435638    5502 out.go:270] * 
	* 
	W0816 06:18:45.436286    5502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:18:45.498593    5502 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-985000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-985000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-985000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (195.101203ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-985000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-985000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-985000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-985000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (172.771514ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-985000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-985000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-985000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-16 06:18:45.97926 -0700 PDT m=+3552.677289991
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-985000 -n docker-flags-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-985000 -n docker-flags-985000: exit status 7 (80.459368ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 06:18:46.057667    5587 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:18:46.057690    5587 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-985000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-985000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-985000: (5.242735391s)
--- FAIL: TestDockerFlags (252.08s)

                                                
                                    
TestForceSystemdFlag (252.03s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-222000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-222000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.377132192s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-222000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-222000" primary control-plane node in "force-systemd-flag-222000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-222000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 06:13:36.121024    5473 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:13:36.121281    5473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:13:36.121286    5473 out.go:358] Setting ErrFile to fd 2...
	I0816 06:13:36.121290    5473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:13:36.121460    5473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:13:36.122916    5473 out.go:352] Setting JSON to false
	I0816 06:13:36.145655    5473 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3794,"bootTime":1723810222,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:13:36.145751    5473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:13:36.168716    5473 out.go:177] * [force-systemd-flag-222000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:13:36.210694    5473 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:13:36.210710    5473 notify.go:220] Checking for updates...
	I0816 06:13:36.252721    5473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:13:36.273805    5473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:13:36.294630    5473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:13:36.315662    5473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:13:36.336720    5473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:13:36.358123    5473 config.go:182] Loaded profile config "force-systemd-env-603000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:13:36.358225    5473 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:13:36.386671    5473 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 06:13:36.428592    5473 start.go:297] selected driver: hyperkit
	I0816 06:13:36.428610    5473 start.go:901] validating driver "hyperkit" against <nil>
	I0816 06:13:36.428621    5473 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:13:36.431888    5473 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:13:36.432016    5473 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:13:36.440785    5473 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:13:36.444762    5473 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:13:36.444783    5473 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:13:36.444825    5473 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 06:13:36.445032    5473 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 06:13:36.445061    5473 cni.go:84] Creating CNI manager for ""
	I0816 06:13:36.445076    5473 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 06:13:36.445082    5473 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 06:13:36.445140    5473 start.go:340] cluster config:
	{Name:force-systemd-flag-222000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:13:36.445227    5473 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:13:36.466677    5473 out.go:177] * Starting "force-systemd-flag-222000" primary control-plane node in "force-systemd-flag-222000" cluster
	I0816 06:13:36.487579    5473 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:13:36.487608    5473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:13:36.487621    5473 cache.go:56] Caching tarball of preloaded images
	I0816 06:13:36.487732    5473 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:13:36.487741    5473 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:13:36.487819    5473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/force-systemd-flag-222000/config.json ...
	I0816 06:13:36.487840    5473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/force-systemd-flag-222000/config.json: {Name:mk393b0333517565459e0b0a012db6fa1a5c324e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:13:36.488173    5473 start.go:360] acquireMachinesLock for force-systemd-flag-222000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:14:33.436047    5473 start.go:364] duration metric: took 56.949372277s to acquireMachinesLock for "force-systemd-flag-222000"
	I0816 06:14:33.436090    5473 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:14:33.436134    5473 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:14:33.478456    5473 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:14:33.478686    5473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:14:33.478773    5473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:14:33.487685    5473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53767
	I0816 06:14:33.488311    5473 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:14:33.489017    5473 main.go:141] libmachine: Using API Version  1
	I0816 06:14:33.489048    5473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:14:33.489326    5473 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:14:33.489458    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .GetMachineName
	I0816 06:14:33.489590    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .DriverName
	I0816 06:14:33.489704    5473 start.go:159] libmachine.API.Create for "force-systemd-flag-222000" (driver="hyperkit")
	I0816 06:14:33.489744    5473 client.go:168] LocalClient.Create starting
	I0816 06:14:33.489776    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:14:33.489828    5473 main.go:141] libmachine: Decoding PEM data...
	I0816 06:14:33.489846    5473 main.go:141] libmachine: Parsing certificate...
	I0816 06:14:33.489898    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:14:33.489935    5473 main.go:141] libmachine: Decoding PEM data...
	I0816 06:14:33.489943    5473 main.go:141] libmachine: Parsing certificate...
	I0816 06:14:33.489954    5473 main.go:141] libmachine: Running pre-create checks...
	I0816 06:14:33.489964    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .PreCreateCheck
	I0816 06:14:33.490044    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:33.490214    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .GetConfigRaw
	I0816 06:14:33.499698    5473 main.go:141] libmachine: Creating machine...
	I0816 06:14:33.499729    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .Create
	I0816 06:14:33.499892    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:33.500062    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:14:33.499898    5483 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:14:33.500125    5473 main.go:141] libmachine: (force-systemd-flag-222000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:14:33.922241    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:14:33.922186    5483 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/id_rsa...
	I0816 06:14:34.012939    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:14:34.012884    5483 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk...
	I0816 06:14:34.012956    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Writing magic tar header
	I0816 06:14:34.012969    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Writing SSH key tar header
	I0816 06:14:34.013308    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:14:34.013260    5483 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000 ...
	I0816 06:14:34.389061    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:34.389078    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid
	I0816 06:14:34.389120    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Using UUID bb4886c8-0d01-4988-a8cf-50eafcc10384
	I0816 06:14:34.414112    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Generated MAC ae:e9:d3:8f:f9:8c
	I0816 06:14:34.414128    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000
	I0816 06:14:34.414165    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bb4886c8-0d01-4988-a8cf-50eafcc10384", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:14:34.414203    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"bb4886c8-0d01-4988-a8cf-50eafcc10384", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:14:34.414279    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "bb4886c8-0d01-4988-a8cf-50eafcc10384", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000"}
	I0816 06:14:34.414348    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U bb4886c8-0d01-4988-a8cf-50eafcc10384 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000"
	I0816 06:14:34.414364    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:14:34.418142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 DEBUG: hyperkit: Pid is 5497
	I0816 06:14:34.418583    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 0
	I0816 06:14:34.418597    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:34.418714    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:34.419659    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:34.419718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:34.419730    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:34.419740    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:34.419749    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:34.419756    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:34.419770    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:34.419778    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:34.419784    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:34.419817    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:34.419830    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:34.419839    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:34.419848    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:34.419871    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:34.419887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:34.419897    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:34.419905    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:34.419913    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:34.419925    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:34.419937    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:34.425186    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:14:34.433329    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:14:34.434219    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:14:34.434239    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:14:34.434264    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:14:34.434287    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:14:34.809403    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:14:34.809419    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:14:34.924272    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:14:34.924298    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:14:34.924323    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:14:34.924337    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:14:34.925182    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:14:34.925207    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:14:36.420317    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 1
	I0816 06:14:36.420331    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:36.420403    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:36.421199    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:36.421258    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:36.421269    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:36.421312    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:36.421329    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:36.421338    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:36.421348    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:36.421365    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:36.421374    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:36.421383    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:36.421391    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:36.421405    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:36.421417    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:36.421425    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:36.421433    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:36.421440    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:36.421448    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:36.421456    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:36.421462    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:36.421476    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:38.422449    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 2
	I0816 06:14:38.422465    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:38.422545    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:38.423336    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:38.423409    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:38.423418    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:38.423427    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:38.423434    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:38.423442    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:38.423448    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:38.423455    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:38.423463    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:38.423481    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:38.423493    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:38.423502    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:38.423510    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:38.423517    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:38.423527    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:38.423535    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:38.423543    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:38.423550    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:38.423557    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:38.423575    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:40.299572    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:14:40.299714    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:14:40.299725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:14:40.319990    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:14:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:14:40.425665    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 3
	I0816 06:14:40.425692    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:40.425848    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:40.427286    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:40.427404    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:40.427429    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:40.427447    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:40.427460    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:40.427477    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:40.427491    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:40.427513    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:40.427533    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:40.427568    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:40.427582    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:40.427617    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:40.427626    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:40.427656    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:40.427734    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:40.427761    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:40.427807    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:40.427827    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:40.427839    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:40.427849    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:42.428564    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 4
	I0816 06:14:42.428579    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:42.428680    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:42.429470    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:42.429556    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:42.429569    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:42.429580    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:42.429589    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:42.429599    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:42.429608    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:42.429623    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:42.429632    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:42.429653    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:42.429666    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:42.429675    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:42.429683    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:42.429697    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:42.429708    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:42.429717    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:42.429725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:42.429732    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:42.429740    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:42.429767    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:44.431680    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 5
	I0816 06:14:44.431696    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:44.431762    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:44.432556    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:44.432585    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:44.432607    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:44.432622    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:44.432631    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:44.432651    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:44.432659    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:44.432666    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:44.432672    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:44.432678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:44.432685    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:44.432692    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:44.432699    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:44.432712    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:44.432720    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:44.432728    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:44.432734    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:44.432740    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:44.432753    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:44.432762    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:46.434754    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 6
	I0816 06:14:46.434769    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:46.434823    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:46.435675    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:46.435725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:46.435736    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:46.435746    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:46.435751    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:46.435759    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:46.435772    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:46.435780    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:46.435786    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:46.435792    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:46.435798    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:46.435806    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:46.435817    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:46.435825    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:46.435839    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:46.435852    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:46.435860    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:46.435868    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:46.435875    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:46.435883    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:48.437850    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 7
	I0816 06:14:48.437865    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:48.437939    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:48.438721    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:48.438752    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:48.438766    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:48.438784    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:48.438793    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:48.438802    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:48.438810    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:48.438819    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:48.438828    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:48.438835    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:48.438842    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:48.438851    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:48.438858    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:48.438875    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:48.438886    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:48.438892    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:48.438898    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:48.438911    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:48.438923    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:48.438930    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:50.440910    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 8
	I0816 06:14:50.440925    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:50.440990    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:50.441823    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:50.441866    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:50.441877    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:50.441886    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:50.441893    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:50.441902    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:50.441912    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:50.441926    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:50.441940    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:50.441947    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:50.441954    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:50.441968    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:50.441977    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:50.441988    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:50.441996    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:50.442003    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:50.442011    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:50.442018    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:50.442026    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:50.442042    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:52.443980    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 9
	I0816 06:14:52.443995    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:52.444067    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:52.444826    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:52.444874    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:52.444886    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:52.444906    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:52.444917    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:52.444925    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:52.444943    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:52.444952    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:52.444960    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:52.444966    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:52.444983    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:52.444995    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:52.445004    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:52.445010    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:52.445016    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:52.445023    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:52.445030    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:52.445042    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:52.445067    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:52.445075    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:54.446038    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 10
	I0816 06:14:54.446051    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:54.446143    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:54.446921    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:54.446974    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:54.446985    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:54.446994    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:54.447003    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:54.447018    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:54.447028    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:54.447043    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:54.447054    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:54.447069    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:54.447097    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:54.447106    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:54.447118    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:54.447127    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:54.447134    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:54.447142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:54.447153    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:54.447164    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:54.447173    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:54.447181    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:56.449108    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 11
	I0816 06:14:56.449121    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:56.449192    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:56.449982    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:56.450036    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:56.450047    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:56.450056    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:56.450064    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:56.450083    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:56.450091    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:56.450099    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:56.450108    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:56.450114    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:56.450124    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:56.450132    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:56.450140    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:56.450147    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:56.450155    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:56.450162    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:56.450169    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:56.450184    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:56.450198    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:56.450208    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:58.452202    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 12
	I0816 06:14:58.452215    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:58.452264    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:14:58.453106    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:14:58.453163    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:58.453174    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:58.453183    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:58.453192    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:58.453200    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:58.453206    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:58.453212    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:58.453218    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:58.453233    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:58.453245    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:58.453263    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:58.453271    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:58.453287    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:58.453298    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:58.453317    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:58.453330    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:58.453351    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:58.453363    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:58.453373    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:00.454626    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 13
	I0816 06:15:00.454643    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:00.454704    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:00.455491    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:00.455541    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:00.455551    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:00.455564    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:00.455577    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:00.455588    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:00.455596    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:00.455604    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:00.455613    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:00.455621    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:00.455629    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:00.455636    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:00.455642    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:00.455656    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:00.455678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:00.455686    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:00.455694    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:00.455701    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:00.455710    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:00.455718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:02.457646    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 14
	I0816 06:15:02.457659    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:02.457723    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:02.458506    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:02.458560    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:02.458570    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:02.458608    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:02.458620    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:02.458629    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:02.458638    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:02.458646    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:02.458654    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:02.458661    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:02.458667    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:02.458682    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:02.458695    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:02.458705    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:02.458716    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:02.458728    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:02.458736    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:02.458743    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:02.458751    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:02.458779    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:04.459900    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 15
	I0816 06:15:04.459913    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:04.459976    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:04.460763    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:04.460801    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:04.460813    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:04.460835    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:04.460848    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:04.460856    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:04.460863    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:04.460872    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:04.460879    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:04.460887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:04.460902    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:04.460914    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:04.460922    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:04.460928    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:04.460954    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:04.460978    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:04.460993    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:04.461017    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:04.461029    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:04.461037    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:06.461857    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 16
	I0816 06:15:06.461869    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:06.461944    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:06.462718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:06.462764    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:06.462772    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:06.462799    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:06.462810    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:06.462818    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:06.462825    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:06.462831    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:06.462838    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:06.462844    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:06.462852    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:06.462859    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:06.462866    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:06.462900    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:06.462910    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:06.462919    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:06.462927    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:06.462937    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:06.462943    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:06.462951    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:08.464906    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 17
	I0816 06:15:08.464919    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:08.464987    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:08.465929    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:08.465982    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:08.465997    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:08.466004    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:08.466012    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:08.466023    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:08.466033    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:08.466048    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:08.466060    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:08.466068    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:08.466076    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:08.466084    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:08.466092    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:08.466099    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:08.466107    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:08.466123    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:08.466131    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:08.466142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:08.466151    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:08.466167    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:10.466531    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 18
	I0816 06:15:10.466546    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:10.466606    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:10.467392    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:10.467462    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:10.467473    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:10.467481    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:10.467488    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:10.467497    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:10.467505    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:10.467512    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:10.467519    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:10.467532    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:10.467545    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:10.467554    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:10.467563    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:10.467577    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:10.467591    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:10.467600    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:10.467608    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:10.467615    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:10.467621    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:10.467628    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:12.468699    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 19
	I0816 06:15:12.468712    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:12.468766    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:12.469608    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:12.469651    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:12.469664    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:12.469674    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:12.469684    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:12.469694    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:12.469712    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:12.469719    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:12.469725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:12.469733    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:12.469740    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:12.469754    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:12.469761    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:12.469768    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:12.469776    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:12.469785    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:12.469793    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:12.469802    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:12.469810    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:12.469817    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:14.470676    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 20
	I0816 06:15:14.470691    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:14.470759    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:14.471538    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:14.471584    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:14.471594    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:14.471602    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:14.471608    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:14.471616    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:14.471622    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:14.471629    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:14.471635    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:14.471651    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:14.471658    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:14.471666    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:14.471674    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:14.471690    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:14.471702    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:14.471719    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:14.471735    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:14.471744    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:14.471752    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:14.471761    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:16.473070    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 21
	I0816 06:15:16.473083    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:16.473142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:16.473965    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:16.474012    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:16.474023    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:16.474036    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:16.474046    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:16.474055    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:16.474063    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:16.474071    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:16.474078    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:16.474084    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:16.474091    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:16.474098    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:16.474116    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:16.474130    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:16.474142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:16.474150    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:16.474159    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:16.474173    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:16.474193    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:16.474210    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:18.474677    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 22
	I0816 06:15:18.474690    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:18.474758    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:18.475652    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:18.475689    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:18.475696    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:18.475705    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:18.475711    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:18.475724    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:18.475737    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:18.475745    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:18.475754    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:18.475761    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:18.475770    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:18.475782    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:18.475791    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:18.475803    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:18.475812    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:18.475819    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:18.475827    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:18.475842    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:18.475855    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:18.475864    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:20.477835    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 23
	I0816 06:15:20.477850    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:20.477938    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:20.478776    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:20.478818    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:20.478829    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:20.478875    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:20.478887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:20.478910    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:20.478922    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:20.478937    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:20.478956    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:20.478969    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:20.478985    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:20.478993    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:20.479001    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:20.479007    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:20.479019    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:20.479031    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:20.479040    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:20.479048    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:20.479058    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:20.479066    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:22.480946    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 24
	I0816 06:15:22.480963    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:22.481015    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:22.481898    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:22.481954    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:22.481961    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:22.481976    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:22.481985    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:22.481992    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:22.481998    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:22.482011    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:22.482028    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:22.482035    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:22.482043    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:22.482059    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:22.482067    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:22.482073    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:22.482080    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:22.482094    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:22.482101    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:22.482107    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:22.482114    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:22.482122    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:24.483310    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 25
	I0816 06:15:24.483325    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:24.483409    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:24.484412    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:24.484462    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:24.484473    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:24.484489    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:24.484495    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:24.484502    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:24.484508    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:24.484515    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:24.484521    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:24.484528    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:24.484535    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:24.484540    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:24.484548    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:24.484558    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:24.484570    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:24.484586    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:24.484595    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:24.484601    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:24.484607    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:24.484626    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:26.486575    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 26
	I0816 06:15:26.486589    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:26.486679    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:26.487475    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:26.487537    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:26.487551    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:26.487566    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:26.487576    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:26.487586    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:26.487595    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:26.487603    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:26.487611    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:26.487630    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:26.487645    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:26.487662    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:26.487671    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:26.487678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:26.487685    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:26.487691    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:26.487699    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:26.487710    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:26.487718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:26.487730    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:28.489650    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 27
	I0816 06:15:28.489665    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:28.489731    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:28.490510    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:28.490567    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:28.490584    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:28.490592    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:28.490598    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:28.490605    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:28.490611    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:28.490618    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:28.490627    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:28.490633    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:28.490640    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:28.490646    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:28.490653    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:28.490665    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:28.490674    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:28.490689    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:28.490711    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:28.490726    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:28.490739    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:28.490749    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:30.491918    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 28
	I0816 06:15:30.491933    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:30.492010    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:30.492786    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:30.492849    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:30.492860    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:30.492892    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:30.492902    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:30.492909    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:30.492915    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:30.492924    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:30.492930    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:30.492937    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:30.492948    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:30.492958    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:30.492967    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:30.492979    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:30.492987    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:30.492997    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:30.493007    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:30.493015    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:30.493022    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:30.493033    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:32.494910    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 29
	I0816 06:15:32.494923    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:32.494995    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:32.495785    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for ae:e9:d3:8f:f9:8c in /var/db/dhcpd_leases ...
	I0816 06:15:32.495827    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:15:32.495839    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:15:32.495848    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:15:32.495855    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:15:32.495862    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:15:32.495869    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:15:32.495875    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:15:32.495884    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:15:32.495898    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:15:32.495911    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:15:32.495919    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:15:32.495941    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:15:32.495986    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:15:32.495995    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:15:32.496007    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:15:32.496015    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:15:32.496022    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:15:32.496029    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:15:32.496037    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:15:34.497539    5473 client.go:171] duration metric: took 1m1.00941658s to LocalClient.Create
	I0816 06:15:36.498859    5473 start.go:128] duration metric: took 1m3.064400438s to createHost
	I0816 06:15:36.498874    5473 start.go:83] releasing machines lock for "force-systemd-flag-222000", held for 1m3.064502487s
	W0816 06:15:36.498888    5473 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:e9:d3:8f:f9:8c
	I0816 06:15:36.499215    5473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:15:36.499244    5473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:15:36.508418    5473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53783
	I0816 06:15:36.508811    5473 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:15:36.509177    5473 main.go:141] libmachine: Using API Version  1
	I0816 06:15:36.509197    5473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:15:36.509403    5473 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:15:36.509769    5473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:15:36.509795    5473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:15:36.518092    5473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53785
	I0816 06:15:36.518468    5473 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:15:36.518797    5473 main.go:141] libmachine: Using API Version  1
	I0816 06:15:36.518808    5473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:15:36.519061    5473 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:15:36.519198    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .GetState
	I0816 06:15:36.519300    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.519362    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:36.520308    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .DriverName
	I0816 06:15:36.562428    5473 out.go:177] * Deleting "force-systemd-flag-222000" in hyperkit ...
	I0816 06:15:36.583313    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .Remove
	I0816 06:15:36.583465    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.583478    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.583582    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:36.584547    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:36.584623    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | waiting for graceful shutdown
	I0816 06:15:37.584771    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:37.584861    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:37.585804    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | waiting for graceful shutdown
	I0816 06:15:38.587502    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:38.587627    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:38.589354    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | waiting for graceful shutdown
	I0816 06:15:39.589607    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:39.589710    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:39.590381    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | waiting for graceful shutdown
	I0816 06:15:40.590649    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:40.590716    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:40.591309    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | waiting for graceful shutdown
	I0816 06:15:41.591673    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:15:41.591767    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5497
	I0816 06:15:41.592841    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | sending sigkill
	I0816 06:15:41.592853    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 06:15:41.604631    5473 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:e9:d3:8f:f9:8c
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:e9:d3:8f:f9:8c
	I0816 06:15:41.604646    5473 start.go:729] Will try again in 5 seconds ...
	I0816 06:15:41.615114    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:15:41 WARN : hyperkit: failed to read stderr: EOF
	I0816 06:15:41.615138    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:15:41 WARN : hyperkit: failed to read stdout: EOF
	I0816 06:15:46.606410    5473 start.go:360] acquireMachinesLock for force-systemd-flag-222000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:16:39.358585    5473 start.go:364] duration metric: took 52.753557186s to acquireMachinesLock for "force-systemd-flag-222000"
	I0816 06:16:39.358607    5473 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-222000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-222000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:16:39.358675    5473 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:16:39.380112    5473 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:16:39.380199    5473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:16:39.380214    5473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:16:39.388566    5473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53793
	I0816 06:16:39.388898    5473 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:16:39.389255    5473 main.go:141] libmachine: Using API Version  1
	I0816 06:16:39.389270    5473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:16:39.389519    5473 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:16:39.389645    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .GetMachineName
	I0816 06:16:39.389755    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .DriverName
	I0816 06:16:39.389867    5473 start.go:159] libmachine.API.Create for "force-systemd-flag-222000" (driver="hyperkit")
	I0816 06:16:39.389887    5473 client.go:168] LocalClient.Create starting
	I0816 06:16:39.389917    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:16:39.389968    5473 main.go:141] libmachine: Decoding PEM data...
	I0816 06:16:39.389989    5473 main.go:141] libmachine: Parsing certificate...
	I0816 06:16:39.390029    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:16:39.390066    5473 main.go:141] libmachine: Decoding PEM data...
	I0816 06:16:39.390078    5473 main.go:141] libmachine: Parsing certificate...
	I0816 06:16:39.390090    5473 main.go:141] libmachine: Running pre-create checks...
	I0816 06:16:39.390096    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .PreCreateCheck
	I0816 06:16:39.390172    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.390211    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .GetConfigRaw
	I0816 06:16:39.423994    5473 main.go:141] libmachine: Creating machine...
	I0816 06:16:39.424003    5473 main.go:141] libmachine: (force-systemd-flag-222000) Calling .Create
	I0816 06:16:39.424106    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:39.424234    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:16:39.424100    5537 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:16:39.424299    5473 main.go:141] libmachine: (force-systemd-flag-222000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:16:39.646236    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:16:39.646140    5537 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/id_rsa...
	I0816 06:16:39.763304    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:16:39.763218    5537 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk...
	I0816 06:16:39.763314    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Writing magic tar header
	I0816 06:16:39.763325    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Writing SSH key tar header
	I0816 06:16:39.763975    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | I0816 06:16:39.763891    5537 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000 ...
	I0816 06:16:40.143500    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:40.143521    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid
	I0816 06:16:40.143532    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Using UUID 293eb23a-2b41-4303-989e-93ab5dfa285b
	I0816 06:16:40.169874    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Generated MAC 6a:15:6b:27:49:92
	I0816 06:16:40.169895    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000
	I0816 06:16:40.169946    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"293eb23a-2b41-4303-989e-93ab5dfa285b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:16:40.169980    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"293eb23a-2b41-4303-989e-93ab5dfa285b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:16:40.170028    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "293eb23a-2b41-4303-989e-93ab5dfa285b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/fo
rce-systemd-flag-222000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000"}
	I0816 06:16:40.170076    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 293eb23a-2b41-4303-989e-93ab5dfa285b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/force-systemd-flag-222000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/bzimage,/Users/jenkins/minikube-integr
ation/19423-1009/.minikube/machines/force-systemd-flag-222000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-222000"
	I0816 06:16:40.170087    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:16:40.173053    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 DEBUG: hyperkit: Pid is 5538
	I0816 06:16:40.173477    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 0
	I0816 06:16:40.173493    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:40.173624    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:40.174591    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:40.174678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:40.174709    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:40.174725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:40.174758    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:40.174778    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:40.174804    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:40.174825    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:40.174850    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:40.174862    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:40.174869    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:40.174877    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:40.174899    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:40.174918    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:40.174935    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:40.174965    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:40.174982    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:40.175007    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:40.175016    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:40.175025    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:40.180540    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:16:40.188537    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-flag-222000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:16:40.189441    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:16:40.189457    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:16:40.189471    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:16:40.189480    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:16:40.566627    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:16:40.566643    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:16:40.681152    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:16:40.681192    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:16:40.681236    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:16:40.681279    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:16:40.682029    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:16:40.682041    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:16:42.176282    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 1
	I0816 06:16:42.176307    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:42.176349    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:42.177173    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:42.177228    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:42.177244    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:42.177257    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:42.177269    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:42.177281    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:42.177294    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:42.177306    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:42.177313    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:42.177324    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:42.177342    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:42.177351    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:42.177362    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:42.177370    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:42.177398    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:42.177412    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:42.177419    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:42.177428    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:42.177441    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:42.177449    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:44.178213    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 2
	I0816 06:16:44.178233    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:44.178297    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:44.179087    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:44.179140    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:44.179156    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:44.179174    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:44.179186    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:44.179201    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:44.179211    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:44.179229    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:44.179242    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:44.179250    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:44.179273    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:44.179287    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:44.179312    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:44.179320    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:44.179328    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:44.179336    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:44.179343    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:44.179351    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:44.179359    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:44.179367    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:46.049703    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:16:46.049808    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:16:46.049816    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:16:46.070443    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | 2024/08/16 06:16:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:16:46.180650    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 3
	I0816 06:16:46.180675    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:46.180912    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:46.182417    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:46.182542    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:46.182583    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:46.182598    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:46.182619    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:46.182645    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:46.182658    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:46.182669    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:46.182676    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:46.182684    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:46.182695    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:46.182711    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:46.182721    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:46.182729    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:46.182743    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:46.182757    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:46.182774    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:46.182788    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:46.182798    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:46.182808    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:48.183301    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 4
	I0816 06:16:48.183316    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:48.183435    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:48.184243    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:48.184306    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:48.184314    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:48.184338    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:48.184349    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:48.184360    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:48.184368    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:48.184375    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:48.184384    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:48.184400    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:48.184415    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:48.184428    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:48.184438    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:48.184446    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:48.184457    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:48.184466    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:48.184473    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:48.184481    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:48.184488    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:48.184497    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:50.186471    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 5
	I0816 06:16:50.186488    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:50.186534    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:50.187690    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:50.187718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:50.187728    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:50.187746    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:50.187759    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:50.187768    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:50.187780    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:50.187788    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:50.187806    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:50.187815    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:50.187824    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:50.187832    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:50.187845    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:50.187863    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:50.187874    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:50.187883    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:50.187891    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:50.187898    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:50.187907    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:50.187915    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:52.189847    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 6
	I0816 06:16:52.189859    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:52.189942    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:52.190725    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:52.190765    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:52.190773    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:52.190786    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:52.190795    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:52.190816    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:52.190825    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:52.190833    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:52.190842    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:52.190859    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:52.190872    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:52.190881    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:52.190887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:52.190905    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:52.190916    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:52.190925    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:52.190933    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:52.190940    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:52.190947    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:52.190956    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:54.191640    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 7
	I0816 06:16:54.191653    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:54.191732    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:54.192563    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:54.192603    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:54.192612    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:54.192625    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:54.192634    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:54.192641    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:54.192648    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:54.192673    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:54.192684    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:54.192706    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:54.192720    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:54.192729    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:54.192737    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:54.192745    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:54.192754    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:54.192762    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:54.192770    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:54.192778    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:54.192787    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:54.192796    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:56.192754    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 8
	I0816 06:16:56.192767    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:56.192829    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:56.193666    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:56.193708    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:56.193721    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:56.193731    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:56.193741    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:56.193767    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:56.193779    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:56.193788    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:56.193795    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:56.193803    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:56.193809    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:56.193817    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:56.193831    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:56.193838    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:56.193848    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:56.193856    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:56.193866    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:56.193874    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:56.193895    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:56.193905    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:16:58.194950    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 9
	I0816 06:16:58.194965    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:16:58.195040    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:16:58.195867    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:16:58.195956    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:16:58.195967    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:16:58.195975    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:16:58.195981    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:16:58.195988    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:16:58.195993    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:16:58.196000    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:16:58.196012    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:16:58.196019    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:16:58.196025    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:16:58.196030    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:16:58.196045    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:16:58.196064    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:16:58.196076    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:16:58.196084    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:16:58.196092    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:16:58.196100    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:16:58.196109    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:16:58.196150    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:00.196059    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 10
	I0816 06:17:00.196078    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:00.196125    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:00.197274    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:00.197317    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:00.197331    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:00.197351    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:00.197362    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:00.197370    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:00.197378    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:00.197385    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:00.197393    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:00.197400    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:00.197406    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:00.197413    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:00.197420    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:00.197436    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:00.197448    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:00.197456    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:00.197463    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:00.197471    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:00.197479    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:00.197493    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:02.199404    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 11
	I0816 06:17:02.199416    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:02.199488    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:02.200280    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:02.200317    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:02.200333    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:02.200348    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:02.200358    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:02.200364    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:02.200370    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:02.200378    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:02.200385    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:02.200392    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:02.200398    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:02.200405    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:02.200414    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:02.200421    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:02.200429    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:02.200438    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:02.200444    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:02.200458    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:02.200471    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:02.200487    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:04.201595    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 12
	I0816 06:17:04.201605    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:04.201664    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:04.202449    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:04.202497    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:04.202508    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:04.202521    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:04.202528    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:04.202542    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:04.202551    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:04.202558    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:04.202565    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:04.202573    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:04.202579    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:04.202588    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:04.202596    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:04.202611    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:04.202623    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:04.202640    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:04.202652    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:04.202666    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:04.202678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:04.202696    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:06.203604    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 13
	I0816 06:17:06.203614    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:06.203665    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:06.204459    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:06.204502    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:06.204520    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:06.204536    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:06.204548    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:06.204560    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:06.204567    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:06.204574    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:06.204587    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:06.204597    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:06.204606    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:06.204614    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:06.204621    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:06.204629    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:06.204646    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:06.204659    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:06.204670    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:06.204678    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:06.204685    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:06.204699    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:08.206416    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 14
	I0816 06:17:08.206426    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:08.206477    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:08.207328    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:08.207375    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:08.207385    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:08.207394    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:08.207403    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:08.207420    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:08.207453    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:08.207464    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:08.207475    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:08.207483    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:08.207498    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:08.207507    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:08.207514    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:08.207523    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:08.207534    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:08.207544    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:08.207552    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:08.207560    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:08.207573    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:08.207585    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:10.208397    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 15
	I0816 06:17:10.208413    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:10.208479    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:10.209315    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:10.209362    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:10.209374    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:10.209383    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:10.209394    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:10.209407    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:10.209419    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:10.209441    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:10.209453    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:10.209461    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:10.209470    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:10.209486    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:10.209497    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:10.209506    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:10.209514    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:10.209522    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:10.209533    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:10.209542    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:10.209552    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:10.209559    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:12.211478    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 16
	I0816 06:17:12.211491    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:12.211617    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:12.212510    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:12.212562    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:12.212573    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:12.212581    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:12.212587    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:12.212596    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:12.212603    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:12.212610    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:12.212616    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:12.212622    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:12.212630    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:12.212652    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:12.212667    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:12.212676    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:12.212696    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:12.212715    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:12.212727    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:12.212737    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:12.212743    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:12.212749    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:14.213889    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 17
	I0816 06:17:14.213912    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:14.213960    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:14.214847    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:14.214887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:14.214903    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:14.214914    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:14.214922    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:14.214944    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:14.214957    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:14.214965    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:14.214974    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:14.214981    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:14.214988    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:14.214995    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:14.215002    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:14.215018    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:14.215030    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:14.215042    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:14.215066    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:14.215101    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:14.215107    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:14.215122    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:16.216076    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 18
	I0816 06:17:16.216087    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:16.216147    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:16.217022    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:16.217073    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:16.217082    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:16.217091    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:16.217096    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:16.217106    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:16.217115    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:16.217122    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:16.217142    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:16.217152    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:16.217162    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:16.217170    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:16.217177    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:16.217187    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:16.217195    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:16.217201    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:16.217209    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:16.217221    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:16.217229    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:16.217248    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:18.219219    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 19
	I0816 06:17:18.219232    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:18.219259    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:18.220194    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:18.220242    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:18.220259    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:18.220286    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:18.220295    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:18.220302    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:18.220311    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:18.220317    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:18.220325    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:18.220339    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:18.220350    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:18.220358    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:18.220365    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:18.220371    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:18.220379    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:18.220385    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:18.220392    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:18.220407    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:18.220418    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:18.220435    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:20.220513    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 20
	I0816 06:17:20.220525    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:20.220595    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:20.221594    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:20.221651    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:20.221661    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:20.221673    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:20.221685    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:20.221692    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:20.221698    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:20.221712    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:20.221721    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:20.221729    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:20.221752    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:20.221771    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:20.221779    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:20.221787    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:20.221796    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:20.221813    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:20.221827    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:20.221835    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:20.221842    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:20.221850    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:22.223738    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 21
	I0816 06:17:22.223760    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:22.223811    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:22.224596    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:22.224622    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:22.224630    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:22.224639    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:22.224646    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:22.224653    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:22.224661    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:22.224668    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:22.224688    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:22.224707    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:22.224719    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:22.224727    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:22.224739    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:22.224748    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:22.224755    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:22.224764    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:22.224770    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:22.224777    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:22.224796    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:22.224808    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:24.225665    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 22
	I0816 06:17:24.225679    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:24.225752    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:24.226590    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:24.226657    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:24.226670    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:24.226680    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:24.226687    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:24.226693    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:24.226709    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:24.226718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:24.226731    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:24.226741    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:24.226749    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:24.226757    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:24.226765    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:24.226773    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:24.226781    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:24.226788    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:24.226798    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:24.226809    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:24.226817    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:24.226826    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:26.228736    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 23
	I0816 06:17:26.228749    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:26.228801    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:26.229773    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:26.229818    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:26.229833    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:26.229854    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:26.229862    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:26.229868    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:26.229875    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:26.229882    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:26.229890    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:26.229900    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:26.229908    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:26.229915    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:26.229924    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:26.229938    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:26.229950    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:26.229960    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:26.229967    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:26.229974    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:26.229982    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:26.229990    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:28.231934    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 24
	I0816 06:17:28.231948    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:28.232041    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:28.232812    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:28.232867    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:28.232878    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:28.232885    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:28.232891    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:28.232921    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:28.232933    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:28.232955    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:28.232968    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:28.232978    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:28.232986    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:28.232993    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:28.233002    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:28.233009    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:28.233014    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:28.233022    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:28.233034    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:28.233043    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:28.233053    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:28.233074    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:30.235072    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 25
	I0816 06:17:30.235082    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:30.235139    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:30.235950    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:30.235997    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:30.236007    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:30.236017    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:30.236024    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:30.236030    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:30.236037    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:30.236044    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:30.236053    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:30.236060    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:30.236077    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:30.236087    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:30.236093    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:30.236099    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:30.236107    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:30.236114    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:30.236122    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:30.236128    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:30.236138    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:30.236146    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:32.238110    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 26
	I0816 06:17:32.238121    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:32.238191    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:32.239164    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:32.239209    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:32.239220    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:32.239239    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:32.239249    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:32.239256    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:32.239263    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:32.239270    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:32.239277    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:32.239285    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:32.239293    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:32.239298    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:32.239311    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:32.239323    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:32.239331    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:32.239339    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:32.239347    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:32.239357    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:32.239365    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:32.239373    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:34.239997    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 27
	I0816 06:17:34.240010    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:34.240108    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:34.240889    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:34.240951    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:34.240962    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:34.240981    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:34.240993    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:34.241002    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:34.241016    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:34.241025    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:34.241040    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:34.241048    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:34.241056    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:34.241066    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:34.241074    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:34.241088    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:34.241100    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:34.241111    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:34.241119    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:34.241128    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:34.241136    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:34.241145    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:36.241766    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 28
	I0816 06:17:36.241779    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:36.242220    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:36.242654    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:36.242742    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:36.242752    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:36.242760    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:36.242770    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:36.242779    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:36.242789    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:36.242825    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:36.242832    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:36.242841    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:36.242849    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:36.242857    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:36.242873    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:36.242881    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:36.242887    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:36.242955    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:36.242990    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:36.243005    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:36.243030    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:36.243042    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:38.244861    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Attempt 29
	I0816 06:17:38.244878    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:17:38.244942    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | hyperkit pid from json: 5538
	I0816 06:17:38.245718    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Searching for 6a:15:6b:27:49:92 in /var/db/dhcpd_leases ...
	I0816 06:17:38.245801    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:17:38.245823    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:17:38.245840    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:17:38.245852    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:17:38.245860    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:17:38.245869    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:17:38.245876    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:17:38.245884    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:17:38.245891    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:17:38.245899    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:17:38.245907    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:17:38.245914    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:17:38.245956    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:17:38.245966    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:17:38.245973    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:17:38.245983    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:17:38.245994    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:17:38.246002    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:17:38.246041    5473 main.go:141] libmachine: (force-systemd-flag-222000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:17:40.247869    5473 client.go:171] duration metric: took 1m0.85960147s to LocalClient.Create
	I0816 06:17:42.248002    5473 start.go:128] duration metric: took 1m2.890995047s to createHost
	I0816 06:17:42.248017    5473 start.go:83] releasing machines lock for "force-systemd-flag-222000", held for 1m2.891101539s
	W0816 06:17:42.248108    5473 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-222000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:15:6b:27:49:92
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-222000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:15:6b:27:49:92
	I0816 06:17:42.311318    5473 out.go:201] 
	W0816 06:17:42.332449    5473 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:15:6b:27:49:92
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:15:6b:27:49:92
	W0816 06:17:42.332459    5473 out.go:270] * 
	* 
	W0816 06:17:42.333066    5473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:17:42.394416    5473 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-222000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-222000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-222000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (181.686636ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-222000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-222000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-16 06:17:42.757489 -0700 PDT m=+3489.453828665
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-222000 -n force-systemd-flag-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-222000 -n force-systemd-flag-222000: exit status 7 (79.369415ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 06:17:42.834911    5546 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:17:42.834930    5546 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-222000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-222000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-222000: (5.256615731s)
--- FAIL: TestForceSystemdFlag (252.03s)

                                                
                                    
TestForceSystemdEnv (233.67s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-603000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0816 06:10:55.987914    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:12:52.907495    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:13:27.953853    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-603000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.072045476s)

                                                
                                                
-- stdout --
	* [force-systemd-env-603000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-603000" primary control-plane node in "force-systemd-env-603000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-603000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 06:10:45.614198    5419 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:10:45.614459    5419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:10:45.614464    5419 out.go:358] Setting ErrFile to fd 2...
	I0816 06:10:45.614468    5419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:10:45.614635    5419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:10:45.616172    5419 out.go:352] Setting JSON to false
	I0816 06:10:45.639574    5419 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3623,"bootTime":1723810222,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:10:45.639662    5419 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:10:45.661680    5419 out.go:177] * [force-systemd-env-603000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:10:45.703194    5419 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:10:45.703226    5419 notify.go:220] Checking for updates...
	I0816 06:10:45.745066    5419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:10:45.768206    5419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:10:45.789270    5419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:10:45.831251    5419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:10:45.852230    5419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0816 06:10:45.873453    5419 config.go:182] Loaded profile config "offline-docker-266000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:10:45.873530    5419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:10:45.902351    5419 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 06:10:45.943913    5419 start.go:297] selected driver: hyperkit
	I0816 06:10:45.943926    5419 start.go:901] validating driver "hyperkit" against <nil>
	I0816 06:10:45.943935    5419 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:10:45.946764    5419 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:10:45.946873    5419 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:10:45.955144    5419 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:10:45.959039    5419 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:10:45.959064    5419 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:10:45.959098    5419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 06:10:45.959309    5419 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 06:10:45.959356    5419 cni.go:84] Creating CNI manager for ""
	I0816 06:10:45.959372    5419 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 06:10:45.959379    5419 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 06:10:45.959441    5419 start.go:340] cluster config:
	{Name:force-systemd-env-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:10:45.959524    5419 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:10:45.980278    5419 out.go:177] * Starting "force-systemd-env-603000" primary control-plane node in "force-systemd-env-603000" cluster
	I0816 06:10:46.001188    5419 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:10:46.001213    5419 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:10:46.001225    5419 cache.go:56] Caching tarball of preloaded images
	I0816 06:10:46.001313    5419 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:10:46.001321    5419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:10:46.001390    5419 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/force-systemd-env-603000/config.json ...
	I0816 06:10:46.001406    5419 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/force-systemd-env-603000/config.json: {Name:mkced7da0ba0720607e56e9c88fa1b9141031be9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:10:46.001685    5419 start.go:360] acquireMachinesLock for force-systemd-env-603000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:11:24.682080    5419 start.go:364] duration metric: took 38.681403188s to acquireMachinesLock for "force-systemd-env-603000"
	I0816 06:11:24.682118    5419 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:11:24.682175    5419 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:11:24.703766    5419 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:11:24.703915    5419 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:11:24.703950    5419 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:11:24.712666    5419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53747
	I0816 06:11:24.713050    5419 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:11:24.713553    5419 main.go:141] libmachine: Using API Version  1
	I0816 06:11:24.713570    5419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:11:24.713792    5419 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:11:24.713902    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .GetMachineName
	I0816 06:11:24.713988    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .DriverName
	I0816 06:11:24.714092    5419 start.go:159] libmachine.API.Create for "force-systemd-env-603000" (driver="hyperkit")
	I0816 06:11:24.714126    5419 client.go:168] LocalClient.Create starting
	I0816 06:11:24.714157    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:11:24.714207    5419 main.go:141] libmachine: Decoding PEM data...
	I0816 06:11:24.714222    5419 main.go:141] libmachine: Parsing certificate...
	I0816 06:11:24.714275    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:11:24.714315    5419 main.go:141] libmachine: Decoding PEM data...
	I0816 06:11:24.714329    5419 main.go:141] libmachine: Parsing certificate...
	I0816 06:11:24.714341    5419 main.go:141] libmachine: Running pre-create checks...
	I0816 06:11:24.714348    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .PreCreateCheck
	I0816 06:11:24.714426    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.714599    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .GetConfigRaw
	I0816 06:11:24.726635    5419 main.go:141] libmachine: Creating machine...
	I0816 06:11:24.726647    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .Create
	I0816 06:11:24.726744    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:24.726867    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:11:24.726729    5435 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:11:24.726907    5419 main.go:141] libmachine: (force-systemd-env-603000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:11:24.932695    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:11:24.932605    5435 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/id_rsa...
	I0816 06:11:25.058403    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:11:25.058282    5435 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk...
	I0816 06:11:25.058426    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Writing magic tar header
	I0816 06:11:25.058442    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Writing SSH key tar header
	I0816 06:11:25.058977    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:11:25.058937    5435 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000 ...
	I0816 06:11:25.436702    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:25.436723    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid
	I0816 06:11:25.436784    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Using UUID 8313b30e-2a06-4371-a24c-aef5643b2933
	I0816 06:11:25.462210    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Generated MAC 6a:94:31:47:f4:13
	I0816 06:11:25.462225    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000
	I0816 06:11:25.462260    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8313b30e-2a06-4371-a24c-aef5643b2933", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:11:25.462299    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8313b30e-2a06-4371-a24c-aef5643b2933", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:11:25.462355    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8313b30e-2a06-4371-a24c-aef5643b2933", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000"}
	I0816 06:11:25.462404    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8313b30e-2a06-4371-a24c-aef5643b2933 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000"
	I0816 06:11:25.462449    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:11:25.465304    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 DEBUG: hyperkit: Pid is 5436
	I0816 06:11:25.465752    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 0
	I0816 06:11:25.465770    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:25.465879    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:25.466784    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:25.466860    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:25.466874    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:25.466888    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:25.466899    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:25.466926    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:25.466943    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:25.466969    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:25.466980    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:25.466993    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:25.467008    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:25.467034    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:25.467050    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:25.467081    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:25.467091    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:25.467107    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:25.467128    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:25.467139    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:25.467151    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:25.467172    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:25.472960    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:11:25.481008    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:11:25.482012    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:11:25.482030    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:11:25.482061    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:11:25.482079    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:11:25.858736    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:11:25.858750    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:11:25.973397    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:11:25.973417    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:11:25.973433    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:11:25.973443    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:11:25.974311    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:11:25.974329    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:25 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:11:27.468716    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 1
	I0816 06:11:27.468736    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:27.468798    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:27.469598    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:27.469631    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:27.469643    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:27.469667    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:27.469686    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:27.469697    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:27.469706    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:27.469718    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:27.469726    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:27.469735    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:27.469743    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:27.469752    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:27.469759    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:27.469767    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:27.469773    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:27.469790    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:27.469799    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:27.469807    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:27.469813    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:27.469821    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:29.470794    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 2
	I0816 06:11:29.470808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:29.470872    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:29.471670    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:29.471741    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:29.471756    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:29.471764    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:29.471770    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:29.471780    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:29.471786    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:29.471792    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:29.471798    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:29.471805    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:29.471813    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:29.471820    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:29.471827    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:29.471849    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:29.471858    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:29.471866    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:29.471886    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:29.471904    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:29.471924    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:29.471933    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:31.370108    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:11:31.370319    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:11:31.370354    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:11:31.391694    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:11:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:11:31.473164    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 3
	I0816 06:11:31.473193    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:31.473368    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:31.474777    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:31.474893    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:31.474918    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:31.474948    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:31.475000    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:31.475015    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:31.475047    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:31.475075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:31.475105    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:31.475133    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:31.475144    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:31.475164    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:31.475174    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:31.475194    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:31.475210    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:31.475221    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:31.475231    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:31.475243    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:31.475255    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:31.475266    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:33.475436    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 4
	I0816 06:11:33.475454    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:33.475609    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:33.476460    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:33.476519    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:33.476535    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:33.476566    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:33.476580    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:33.476592    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:33.476600    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:33.476608    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:33.476615    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:33.476631    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:33.476642    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:33.476659    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:33.476672    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:33.476680    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:33.476689    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:33.476696    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:33.476703    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:33.476710    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:33.476717    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:33.476729    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:35.478668    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 5
	I0816 06:11:35.478684    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:35.478733    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:35.479512    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:35.479560    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:35.479572    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:35.479587    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:35.479595    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:35.479608    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:35.479617    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:35.479626    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:35.479636    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:35.479648    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:35.479663    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:35.479671    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:35.479688    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:35.479702    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:35.479713    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:35.479721    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:35.479729    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:35.479745    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:35.479754    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:35.479763    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:37.479880    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 6
	I0816 06:11:37.479896    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:37.479931    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:37.480805    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:37.480856    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:37.480869    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:37.480879    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:37.480890    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:37.480898    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:37.480908    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:37.480917    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:37.480925    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:37.480931    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:37.480940    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:37.480947    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:37.480956    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:37.480962    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:37.480971    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:37.480980    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:37.480993    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:37.481001    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:37.481015    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:37.481038    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:39.482993    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 7
	I0816 06:11:39.483007    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:39.483075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:39.484068    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:39.484121    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:39.484137    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:39.484154    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:39.484165    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:39.484174    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:39.484184    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:39.484193    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:39.484199    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:39.484205    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:39.484212    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:39.484225    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:39.484234    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:39.484243    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:39.484251    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:39.484262    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:39.484270    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:39.484277    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:39.484284    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:39.484293    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:41.485637    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 8
	I0816 06:11:41.485650    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:41.485730    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:41.486493    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:41.486550    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:41.486572    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:41.486585    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:41.486591    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:41.486599    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:41.486610    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:41.486618    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:41.486627    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:41.486644    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:41.486659    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:41.486668    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:41.486677    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:41.486684    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:41.486692    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:41.486703    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:41.486711    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:41.486721    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:41.486728    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:41.486737    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:43.487767    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 9
	I0816 06:11:43.487783    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:43.487878    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:43.488651    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:43.488715    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:43.488723    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:43.488734    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:43.488748    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:43.488755    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:43.488761    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:43.488786    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:43.488807    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:43.488819    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:43.488836    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:43.488848    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:43.488855    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:43.488871    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:43.488879    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:43.488886    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:43.488894    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:43.488904    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:43.488911    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:43.488921    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:45.490749    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 10
	I0816 06:11:45.490761    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:45.490835    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:45.491641    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:45.491654    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:45.491660    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:45.491668    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:45.491674    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:45.491680    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:45.491687    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:45.491694    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:45.491700    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:45.491716    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:45.491737    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:45.491751    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:45.491760    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:45.491776    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:45.491788    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:45.491809    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:45.491823    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:45.491832    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:45.491841    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:45.491849    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:47.492326    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 11
	I0816 06:11:47.492341    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:47.492397    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:47.493188    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:47.493221    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:47.493230    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:47.493239    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:47.493248    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:47.493263    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:47.493277    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:47.493288    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:47.493304    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:47.493315    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:47.493322    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:47.493330    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:47.493338    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:47.493351    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:47.493359    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:47.493367    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:47.493375    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:47.493384    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:47.493391    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:47.493400    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:49.494351    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 12
	I0816 06:11:49.494366    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:49.494440    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:49.495223    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:49.495277    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:49.495308    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:49.495315    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:49.495323    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:49.495333    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:49.495340    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:49.495349    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:49.495355    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:49.495361    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:49.495372    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:49.495381    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:49.495393    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:49.495403    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:49.495416    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:49.495423    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:49.495449    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:49.495462    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:49.495469    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:49.495475    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:51.497560    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 13
	I0816 06:11:51.497575    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:51.497674    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:51.498511    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:51.498566    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:51.498581    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:51.498612    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:51.498625    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:51.498635    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:51.498643    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:51.498649    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:51.498656    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:51.498663    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:51.498669    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:51.498682    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:51.498694    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:51.498713    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:51.498722    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:51.498730    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:51.498738    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:51.498745    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:51.498753    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:51.498770    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:53.500719    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 14
	I0816 06:11:53.500737    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:53.500778    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:53.501621    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:53.501665    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:53.501676    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:53.501686    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:53.501691    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:53.501699    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:53.501704    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:53.501711    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:53.501718    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:53.501730    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:53.501739    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:53.501754    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:53.501766    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:53.501773    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:53.501781    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:53.501789    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:53.501798    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:53.501804    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:53.501812    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:53.501820    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:55.501996    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 15
	I0816 06:11:55.502013    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:55.502072    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:55.502908    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:55.502971    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:55.502982    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:55.502991    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:55.502999    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:55.503006    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:55.503015    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:55.503022    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:55.503027    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:55.503035    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:55.503047    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:55.503054    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:55.503063    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:55.503070    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:55.503078    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:55.503093    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:55.503102    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:55.503109    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:55.503117    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:55.503133    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:57.504789    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 16
	I0816 06:11:57.504802    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:57.504877    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:57.505660    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:57.505711    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:57.505723    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:57.505732    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:57.505738    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:57.505745    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:57.505752    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:57.505759    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:57.505765    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:57.505771    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:57.505785    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:57.505794    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:57.505813    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:57.505829    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:57.505850    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:57.505864    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:57.505872    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:57.505880    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:57.505893    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:57.505902    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:11:59.506274    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 17
	I0816 06:11:59.506291    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:11:59.506359    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:11:59.507177    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:11:59.507222    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:11:59.507245    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:11:59.507258    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:11:59.507279    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:11:59.507288    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:11:59.507295    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:11:59.507301    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:11:59.507308    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:11:59.507314    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:11:59.507325    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:11:59.507342    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:11:59.507351    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:11:59.507359    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:11:59.507367    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:11:59.507375    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:11:59.507389    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:11:59.507406    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:11:59.507418    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:11:59.507437    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:01.507671    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 18
	I0816 06:12:01.507687    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:01.507748    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:01.508574    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:01.508617    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:01.508632    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:01.508641    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:01.508647    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:01.508673    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:01.508688    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:01.508699    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:01.508707    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:01.508715    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:01.508724    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:01.508732    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:01.508740    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:01.508747    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:01.508766    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:01.508772    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:01.508787    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:01.508808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:01.508816    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:01.508826    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:03.509241    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 19
	I0816 06:12:03.509257    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:03.509324    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:03.510119    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:03.510182    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:03.510192    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:03.510199    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:03.510232    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:03.510245    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:03.510254    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:03.510264    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:03.510281    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:03.510294    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:03.510308    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:03.510316    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:03.510324    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:03.510332    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:03.510340    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:03.510354    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:03.510369    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:03.510377    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:03.510386    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:03.510395    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:05.512290    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 20
	I0816 06:12:05.512305    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:05.512370    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:05.513149    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:05.513202    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:05.513212    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:05.513227    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:05.513238    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:05.513256    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:05.513264    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:05.513277    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:05.513287    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:05.513294    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:05.513302    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:05.513310    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:05.513318    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:05.513324    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:05.513332    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:05.513347    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:05.513357    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:05.513374    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:05.513386    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:05.513396    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:07.515303    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 21
	I0816 06:12:07.515319    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:07.515383    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:07.516181    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:07.516226    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:07.516236    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:07.516246    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:07.516252    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:07.516259    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:07.516276    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:07.516289    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:07.516298    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:07.516306    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:07.516315    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:07.516332    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:07.516341    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:07.516350    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:07.516359    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:07.516372    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:07.516385    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:07.516393    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:07.516401    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:07.516410    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:09.516699    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 22
	I0816 06:12:09.516714    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:09.516787    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:09.517576    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:09.517656    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:09.517670    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:09.517685    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:09.517692    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:09.517700    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:09.517708    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:09.517714    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:09.517723    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:09.517738    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:09.517752    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:09.517760    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:09.517769    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:09.517782    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:09.517792    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:09.517800    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:09.517808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:09.517822    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:09.517831    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:09.517839    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:11.518199    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 23
	I0816 06:12:11.518213    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:11.518296    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:11.519109    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:11.519172    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:11.519184    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:11.519193    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:11.519201    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:11.519207    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:11.519213    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:11.519220    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:11.519229    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:11.519236    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:11.519244    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:11.519251    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:11.519257    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:11.519265    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:11.519272    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:11.519279    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:11.519288    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:11.519303    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:11.519316    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:11.519325    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:13.520284    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 24
	I0816 06:12:13.520300    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:13.520424    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:13.521186    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:13.521234    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:13.521244    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:13.521259    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:13.521274    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:13.521285    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:13.521295    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:13.521302    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:13.521310    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:13.521317    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:13.521325    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:13.521332    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:13.521339    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:13.521345    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:13.521351    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:13.521356    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:13.521363    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:13.521370    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:13.521376    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:13.521383    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:15.523346    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 25
	I0816 06:12:15.523358    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:15.523425    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:15.524436    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:15.524490    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:15.524505    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:15.524519    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:15.524532    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:15.524540    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:15.524548    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:15.524556    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:15.524564    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:15.524571    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:15.524578    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:15.524593    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:15.524599    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:15.524606    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:15.524614    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:15.524623    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:15.524635    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:15.524644    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:15.524652    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:15.524661    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:17.526594    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 26
	I0816 06:12:17.526606    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:17.526650    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:17.527475    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:17.527524    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:17.527537    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:17.527544    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:17.527551    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:17.527572    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:17.527580    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:17.527588    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:17.527603    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:17.527611    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:17.527622    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:17.527631    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:17.527638    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:17.527646    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:17.527653    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:17.527661    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:17.527668    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:17.527676    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:17.527701    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:17.527712    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:19.528073    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 27
	I0816 06:12:19.528085    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:19.528187    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:19.528960    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:19.529005    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:19.529016    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:19.529028    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:19.529036    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:19.529050    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:19.529062    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:19.529075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:19.529084    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:19.529091    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:19.529099    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:19.529106    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:19.529114    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:19.529143    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:19.529153    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:19.529163    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:19.529171    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:19.529179    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:19.529191    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:19.529201    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:21.531135    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 28
	I0816 06:12:21.531147    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:21.531207    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:21.532016    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:21.532067    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:21.532082    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:21.532097    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:21.532111    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:21.532123    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:21.532134    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:21.532159    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:21.532178    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:21.532190    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:21.532200    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:21.532209    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:21.532217    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:21.532233    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:21.532241    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:21.532248    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:21.532254    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:21.532261    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:21.532269    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:21.532275    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:23.534012    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 29
	I0816 06:12:23.534022    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:23.534104    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:23.534925    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6a:94:31:47:f4:13 in /var/db/dhcpd_leases ...
	I0816 06:12:23.534978    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:12:23.534989    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:12:23.535007    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:12:23.535017    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:12:23.535032    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:12:23.535054    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:12:23.535075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:12:23.535084    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:12:23.535092    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:12:23.535100    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:12:23.535120    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:12:23.535134    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:12:23.535142    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:12:23.535147    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:12:23.535153    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:12:23.535168    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:12:23.535182    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:12:23.535193    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:12:23.535202    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:12:25.537128    5419 client.go:171] duration metric: took 1m0.824618156s to LocalClient.Create
	I0816 06:12:27.539235    5419 start.go:128] duration metric: took 1m2.85872976s to createHost
	I0816 06:12:27.539261    5419 start.go:83] releasing machines lock for "force-systemd-env-603000", held for 1m2.858851753s
	W0816 06:12:27.539311    5419 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:94:31:47:f4:13
	I0816 06:12:27.539680    5419 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:12:27.539704    5419 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:12:27.548411    5419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53749
	I0816 06:12:27.548737    5419 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:12:27.549076    5419 main.go:141] libmachine: Using API Version  1
	I0816 06:12:27.549093    5419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:12:27.549291    5419 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:12:27.549639    5419 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:12:27.549661    5419 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:12:27.558204    5419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53751
	I0816 06:12:27.558604    5419 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:12:27.558956    5419 main.go:141] libmachine: Using API Version  1
	I0816 06:12:27.558972    5419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:12:27.559206    5419 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:12:27.559322    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .GetState
	I0816 06:12:27.559397    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.559470    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:27.560404    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .DriverName
	I0816 06:12:27.623660    5419 out.go:177] * Deleting "force-systemd-env-603000" in hyperkit ...
	I0816 06:12:27.644627    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .Remove
	I0816 06:12:27.644765    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.644778    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.644847    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:27.645786    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:27.645847    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | waiting for graceful shutdown
	I0816 06:12:28.647935    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:28.648075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:28.648986    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | waiting for graceful shutdown
	I0816 06:12:29.649775    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:29.649868    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:29.651493    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | waiting for graceful shutdown
	I0816 06:12:30.652767    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:30.652866    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:30.653590    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | waiting for graceful shutdown
	I0816 06:12:31.654376    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:31.654446    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:31.655049    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | waiting for graceful shutdown
	I0816 06:12:32.657169    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:12:32.657232    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5436
	I0816 06:12:32.658296    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | sending sigkill
	I0816 06:12:32.658327    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0816 06:12:32.670086    5419 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:94:31:47:f4:13
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:94:31:47:f4:13
	I0816 06:12:32.670112    5419 start.go:729] Will try again in 5 seconds ...
	I0816 06:12:32.678345    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:12:32 WARN : hyperkit: failed to read stdout: EOF
	I0816 06:12:32.678367    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:12:32 WARN : hyperkit: failed to read stderr: EOF
	I0816 06:12:37.670877    5419 start.go:360] acquireMachinesLock for force-systemd-env-603000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:13:30.425030    5419 start.go:364] duration metric: took 52.755534014s to acquireMachinesLock for "force-systemd-env-603000"
	I0816 06:13:30.425061    5419 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:13:30.425116    5419 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:13:30.488204    5419 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 06:13:30.488303    5419 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:13:30.488325    5419 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:13:30.497181    5419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53755
	I0816 06:13:30.497621    5419 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:13:30.498114    5419 main.go:141] libmachine: Using API Version  1
	I0816 06:13:30.498133    5419 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:13:30.498430    5419 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:13:30.498545    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .GetMachineName
	I0816 06:13:30.498769    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .DriverName
	I0816 06:13:30.498934    5419 start.go:159] libmachine.API.Create for "force-systemd-env-603000" (driver="hyperkit")
	I0816 06:13:30.498964    5419 client.go:168] LocalClient.Create starting
	I0816 06:13:30.498992    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:13:30.499045    5419 main.go:141] libmachine: Decoding PEM data...
	I0816 06:13:30.499059    5419 main.go:141] libmachine: Parsing certificate...
	I0816 06:13:30.499105    5419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:13:30.499146    5419 main.go:141] libmachine: Decoding PEM data...
	I0816 06:13:30.499157    5419 main.go:141] libmachine: Parsing certificate...
	I0816 06:13:30.499170    5419 main.go:141] libmachine: Running pre-create checks...
	I0816 06:13:30.499176    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .PreCreateCheck
	I0816 06:13:30.499270    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:30.499293    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .GetConfigRaw
	I0816 06:13:30.509815    5419 main.go:141] libmachine: Creating machine...
	I0816 06:13:30.509825    5419 main.go:141] libmachine: (force-systemd-env-603000) Calling .Create
	I0816 06:13:30.509919    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:30.510057    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:13:30.509911    5462 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:13:30.510117    5419 main.go:141] libmachine: (force-systemd-env-603000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:13:30.854970    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:13:30.854901    5462 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/id_rsa...
	I0816 06:13:30.955572    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:13:30.955519    5462 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk...
	I0816 06:13:30.955589    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Writing magic tar header
	I0816 06:13:30.955616    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Writing SSH key tar header
	I0816 06:13:30.955964    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | I0816 06:13:30.955929    5462 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000 ...
	I0816 06:13:31.331854    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:31.331874    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid
	I0816 06:13:31.331933    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Using UUID c3edac1d-370e-4a59-9a2b-b3b28a70bca7
	I0816 06:13:31.358725    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Generated MAC 6:3b:75:2:ec:2f
	I0816 06:13:31.358740    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000
	I0816 06:13:31.358773    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c3edac1d-370e-4a59-9a2b-b3b28a70bca7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:13:31.358807    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c3edac1d-370e-4a59-9a2b-b3b28a70bca7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:13:31.358850    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c3edac1d-370e-4a59-9a2b-b3b28a70bca7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000"}
	I0816 06:13:31.358898    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c3edac1d-370e-4a59-9a2b-b3b28a70bca7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/force-systemd-env-603000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-603000"
	I0816 06:13:31.358921    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:13:31.361641    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 DEBUG: hyperkit: Pid is 5472
	I0816 06:13:31.362795    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 0
	I0816 06:13:31.362810    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:31.362877    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:31.363818    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:31.363877    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:31.363892    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:31.363904    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:31.363913    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:31.363925    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:31.363932    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:31.363959    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:31.363973    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:31.363980    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:31.363987    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:31.364003    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:31.364025    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:31.364037    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:31.364050    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:31.364063    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:31.364095    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:31.364108    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:31.364116    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:31.364124    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:31.369467    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:13:31.377401    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/force-systemd-env-603000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:13:31.378260    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:13:31.378273    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:13:31.378283    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:13:31.378291    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:13:31.755969    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:13:31.755984    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:13:31.870661    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:13:31.870684    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:13:31.870707    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:13:31.870719    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:13:31.871572    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:13:31.871583    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:13:33.364812    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 1
	I0816 06:13:33.364826    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:33.364954    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:33.365812    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:33.365892    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:33.365902    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:33.365909    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:33.365916    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:33.365925    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:33.365936    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:33.365946    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:33.365954    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:33.365962    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:33.365975    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:33.365990    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:33.366007    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:33.366021    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:33.366030    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:33.366038    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:33.366045    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:33.366053    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:33.366078    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:33.366093    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:35.367482    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 2
	I0816 06:13:35.367500    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:35.367583    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:35.368375    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:35.368443    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:35.368456    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:35.368468    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:35.368480    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:35.368492    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:35.368502    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:35.368512    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:35.368522    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:35.368548    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:35.368561    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:35.368570    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:35.368578    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:35.368585    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:35.368592    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:35.368610    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:35.368618    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:35.368625    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:35.368633    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:35.368642    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:37.254492    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 06:13:37.254643    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 06:13:37.254653    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 06:13:37.275473    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | 2024/08/16 06:13:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 06:13:37.369736    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 3
	I0816 06:13:37.369762    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:37.369939    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:37.371403    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:37.371507    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:37.371542    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:37.371561    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:37.371602    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:37.371626    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:37.371649    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:37.371666    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:37.371680    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:37.371695    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:37.371748    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:37.371764    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:37.371776    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:37.371787    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:37.371796    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:37.371807    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:37.371824    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:37.371840    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:37.371863    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:37.371881    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:39.372861    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 4
	I0816 06:13:39.372878    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:39.372968    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:39.373765    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:39.373830    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:39.373841    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:39.373854    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:39.373878    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:39.373894    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:39.373907    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:39.373921    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:39.373930    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:39.373942    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:39.373951    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:39.373960    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:39.373969    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:39.373975    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:39.373983    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:39.373990    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:39.373999    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:39.374006    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:39.374012    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:39.374032    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:41.375923    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 5
	I0816 06:13:41.375935    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:41.376006    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:41.376857    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:41.376912    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:41.376925    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:41.376934    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:41.376941    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:41.376952    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:41.376962    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:41.376969    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:41.376982    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:41.376990    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:41.377012    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:41.377033    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:41.377045    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:41.377054    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:41.377060    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:41.377078    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:41.377090    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:41.377114    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:41.377127    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:41.377137    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:43.379060    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 6
	I0816 06:13:43.379072    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:43.379153    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:43.379946    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:43.379995    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:43.380013    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:43.380021    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:43.380027    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:43.380038    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:43.380044    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:43.380052    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:43.380059    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:43.380075    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:43.380088    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:43.380106    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:43.380123    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:43.380136    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:43.380144    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:43.380153    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:43.380160    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:43.380169    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:43.380178    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:43.380185    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:45.381303    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 7
	I0816 06:13:45.381315    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:45.381372    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:45.382139    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:45.382183    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:45.382193    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:45.382208    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:45.382218    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:45.382245    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:45.382257    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:45.382281    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:45.382290    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:45.382297    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:45.382305    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:45.382319    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:45.382330    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:45.382341    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:45.382350    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:45.382365    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:45.382374    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:45.382389    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:45.382397    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:45.382409    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:47.382730    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 8
	I0816 06:13:47.382742    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:47.382805    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:47.383591    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:47.383667    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:47.383677    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:47.383690    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:47.383697    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:47.383731    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:47.383749    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:47.383758    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:47.383767    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:47.383775    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:47.383784    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:47.383801    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:47.383814    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:47.383822    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:47.383828    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:47.383835    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:47.383843    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:47.383859    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:47.383872    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:47.383882    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:49.385763    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 9
	I0816 06:13:49.385773    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:49.385876    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:49.386632    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:49.386676    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:49.386688    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:49.386714    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:49.386725    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:49.386733    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:49.386742    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:49.386749    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:49.386757    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:49.386764    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:49.386772    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:49.386779    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:49.386786    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:49.386799    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:49.386828    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:49.386858    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:49.386867    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:49.386875    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:49.386882    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:49.386897    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:51.388827    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 10
	I0816 06:13:51.388837    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:51.388902    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:51.389703    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:51.389756    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:51.389769    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:51.389780    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:51.389800    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:51.389812    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:51.389824    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:51.389837    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:51.389852    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:51.389864    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:51.389872    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:51.389883    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:51.389890    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:51.389904    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:51.389912    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:51.389920    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:51.389927    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:51.389945    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:51.389955    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:51.389977    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:53.390891    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 11
	I0816 06:13:53.390909    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:53.390939    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:53.391774    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:53.391820    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:53.391831    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:53.391842    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:53.391851    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:53.391859    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:53.391867    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:53.391877    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:53.391885    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:53.391900    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:53.391908    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:53.391916    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:53.391921    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:53.391935    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:53.391942    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:53.391951    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:53.391964    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:53.391973    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:53.391980    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:53.391989    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:55.393950    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 12
	I0816 06:13:55.393965    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:55.394084    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:55.394871    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:55.394908    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:55.394916    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:55.394924    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:55.394931    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:55.394938    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:55.394953    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:55.394964    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:55.394971    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:55.394985    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:55.395002    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:55.395011    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:55.395017    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:55.395024    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:55.395029    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:55.395037    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:55.395044    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:55.395052    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:55.395058    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:55.395078    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:57.395764    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 13
	I0816 06:13:57.395777    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:57.395848    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:57.396634    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:57.396690    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:57.396702    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:57.396715    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:57.396723    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:57.396731    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:57.396737    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:57.396746    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:57.396758    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:57.396765    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:57.396771    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:57.396780    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:57.396787    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:57.396796    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:57.396813    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:57.396825    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:57.396835    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:57.396843    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:57.396851    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:57.396859    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:13:59.397808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 14
	I0816 06:13:59.397828    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:13:59.397898    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:13:59.398693    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:13:59.398747    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:13:59.398759    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:13:59.398768    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:13:59.398776    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:13:59.398782    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:13:59.398789    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:13:59.398795    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:13:59.398808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:13:59.398820    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:13:59.398828    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:13:59.398837    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:13:59.398844    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:13:59.398853    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:13:59.398862    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:13:59.398869    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:13:59.398877    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:13:59.398883    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:13:59.398898    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:13:59.398910    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:01.399350    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 15
	I0816 06:14:01.399363    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:01.399426    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:01.400245    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:01.400306    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:01.400318    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:01.400331    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:01.400338    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:01.400345    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:01.400352    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:01.400359    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:01.400365    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:01.400373    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:01.400382    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:01.400391    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:01.400399    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:01.400415    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:01.400426    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:01.400435    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:01.400442    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:01.400449    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:01.400457    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:01.400466    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:03.400726    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 16
	I0816 06:14:03.400737    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:03.400822    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:03.401625    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:03.401656    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:03.401681    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:03.401692    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:03.401702    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:03.401709    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:03.401715    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:03.401722    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:03.401748    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:03.401762    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:03.401769    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:03.401775    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:03.401782    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:03.401788    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:03.401808    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:03.401822    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:03.401833    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:03.401840    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:03.401854    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:03.401863    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:05.402095    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 17
	I0816 06:14:05.402110    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:05.402226    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:05.403026    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:05.403088    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:07.404288    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 18
	I0816 06:14:07.404302    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:07.404380    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:07.405162    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:07.405236    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:09.406358    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 19
	I0816 06:14:09.406372    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:09.406402    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:09.407241    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:09.407290    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:11.409477    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 20
	I0816 06:14:11.409490    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:11.409530    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:11.410534    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:11.410594    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:13.411099    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 21
	I0816 06:14:13.411112    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:13.411181    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:13.412192    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:13.412236    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:15.414387    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 22
	I0816 06:14:15.414400    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:15.414492    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:15.415463    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:15.415511    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:17.417629    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 23
	I0816 06:14:17.417644    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:17.417718    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:17.418495    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:17.418554    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:17.418564    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:17.418575    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:17.418581    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:17.418596    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:17.418610    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:17.418618    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:17.418627    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:17.418635    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:17.418642    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:17.418672    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:17.418685    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:17.418708    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:17.418743    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:17.418752    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:17.418761    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:17.418773    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:17.418781    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:17.418798    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:19.420656    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 24
	I0816 06:14:19.420672    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:19.420741    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:19.421521    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:19.421580    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:19.421590    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:19.421605    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:19.421613    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:19.421621    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:19.421626    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:19.421640    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:19.421651    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:19.421660    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:19.421668    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:19.421675    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:19.421683    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:19.421690    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:19.421697    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:19.421710    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:19.421724    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:19.421741    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:19.421754    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:19.421769    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:21.422616    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 25
	I0816 06:14:21.422627    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:21.422704    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:21.423475    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:21.423519    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:21.423532    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:21.423543    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:21.423550    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:21.423557    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:21.423564    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:21.423571    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:21.423590    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:21.423597    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:21.423604    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:21.423612    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:21.423623    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:21.423631    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:21.423638    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:21.423645    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:21.423655    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:21.423663    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:21.423670    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:21.423677    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:23.424478    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 26
	I0816 06:14:23.424490    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:23.424550    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:23.425386    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:23.425395    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:23.425404    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:23.425410    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:23.425417    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:23.425427    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:23.425434    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:23.425442    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:23.425450    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:23.425471    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:23.425483    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:23.425491    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:23.425500    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:23.425507    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:23.425516    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:23.425524    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:23.425532    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:23.425547    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:23.425561    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:23.425579    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:25.427243    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 27
	I0816 06:14:25.427257    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:25.427315    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:25.428156    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:25.428218    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:25.428230    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:25.428246    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:25.428256    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:25.428263    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:25.428271    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:25.428278    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:25.428287    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:25.428295    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:25.428303    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:25.428320    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:25.428332    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:25.428342    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:25.428350    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:25.428357    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:25.428365    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:25.428373    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:25.428380    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:25.428406    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:27.430330    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 28
	I0816 06:14:27.430344    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:27.430380    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:27.431159    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:27.431201    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:27.431214    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:27.431223    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:27.431230    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:27.431238    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:27.431248    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:27.431255    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:27.431262    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:27.431269    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:27.431276    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:27.431285    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:27.431298    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:27.431309    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:27.431320    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:27.431329    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:27.431356    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:27.431367    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:27.431374    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:27.431385    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:29.432438    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Attempt 29
	I0816 06:14:29.432450    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:14:29.432517    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | hyperkit pid from json: 5472
	I0816 06:14:29.433290    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Searching for 6:3b:75:2:ec:2f in /var/db/dhcpd_leases ...
	I0816 06:14:29.433349    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0816 06:14:29.433360    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:14:29.433368    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:14:29.433376    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:14:29.433396    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:14:29.433405    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:14:29.433414    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:14:29.433420    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:14:29.433427    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:14:29.433435    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:14:29.433449    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:14:29.433461    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:14:29.433500    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:14:29.433509    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:14:29.433516    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:14:29.433523    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:14:29.433544    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:14:29.433559    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:14:29.433578    5419 main.go:141] libmachine: (force-systemd-env-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:14:31.433931    5419 client.go:171] duration metric: took 1m0.936588313s to LocalClient.Create
	I0816 06:14:33.435966    5419 start.go:128] duration metric: took 1m3.012520484s to createHost
	I0816 06:14:33.435979    5419 start.go:83] releasing machines lock for "force-systemd-env-603000", held for 1m3.012618577s
	W0816 06:14:33.436085    5419 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-603000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6:3b:75:2:ec:2f
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-603000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6:3b:75:2:ec:2f
	I0816 06:14:33.499261    5419 out.go:201] 
	W0816 06:14:33.520509    5419 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6:3b:75:2:ec:2f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6:3b:75:2:ec:2f
	W0816 06:14:33.520525    5419 out.go:270] * 
	* 
	W0816 06:14:33.521161    5419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:14:33.583520    5419 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-603000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-603000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-603000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (178.251431ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-603000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-603000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-16 06:14:33.879263 -0700 PDT m=+3300.570551976
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-603000 -n force-systemd-env-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-603000 -n force-systemd-env-603000: exit status 7 (77.803203ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 06:14:33.955191    5488 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:14:33.955212    5488 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-603000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-603000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-603000: (5.272766876s)
--- FAIL: TestForceSystemdEnv (233.67s)

TestMultiControlPlane/serial/RestartCluster (179.51s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-073000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0816 05:43:20.651479    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:43:27.988682    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:44:51.070188    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-073000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (2m55.487442982s)

-- stdout --
	* [ha-073000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-073000" primary control-plane node in "ha-073000" cluster
	* Restarting existing hyperkit VM for "ha-073000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-073000-m02" control-plane node in "ha-073000" cluster
	* Restarting existing hyperkit VM for "ha-073000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-073000-m04" worker node in "ha-073000" cluster
	* Restarting existing hyperkit VM for "ha-073000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

-- /stdout --
** stderr ** 
	I0816 05:43:04.564740    3700 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:04.564910    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.564915    3700 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:04.564919    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.565081    3700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:43:04.566585    3700 out.go:352] Setting JSON to false
	I0816 05:43:04.588805    3700 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1962,"bootTime":1723810222,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:43:04.588897    3700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:04.613000    3700 out.go:177] * [ha-073000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:43:04.653806    3700 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:04.653862    3700 notify.go:220] Checking for updates...
	I0816 05:43:04.696885    3700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:04.717792    3700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:43:04.738830    3700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:04.759882    3700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 05:43:04.780629    3700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:04.802633    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:04.803322    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.803409    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.812971    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
	I0816 05:43:04.813324    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.813803    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.813822    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.814047    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.814164    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.814416    3700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:04.814654    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.814677    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.823004    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
	I0816 05:43:04.823356    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.823668    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.823676    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.823881    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.823986    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.852886    3700 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 05:43:04.873686    3700 start.go:297] selected driver: hyperkit
	I0816 05:43:04.873736    3700 start.go:901] validating driver "hyperkit" against &{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.873963    3700 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:04.874147    3700 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.874351    3700 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 05:43:04.884210    3700 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 05:43:04.888002    3700 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.888025    3700 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 05:43:04.890692    3700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:04.890731    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:04.890738    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:04.890804    3700 start.go:340] cluster config:
	{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.890902    3700 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.933833    3700 out.go:177] * Starting "ha-073000" primary control-plane node in "ha-073000" cluster
	I0816 05:43:04.954485    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:04.954567    3700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 05:43:04.954587    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:04.954798    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:04.954819    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:04.955011    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:04.955870    3700 start.go:360] acquireMachinesLock for ha-073000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:04.955995    3700 start.go:364] duration metric: took 100.576µs to acquireMachinesLock for "ha-073000"
	I0816 05:43:04.956044    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:04.956062    3700 fix.go:54] fixHost starting: 
	I0816 05:43:04.956492    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.956518    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.965467    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
	I0816 05:43:04.965836    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.966195    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.966210    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.966502    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.966647    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.966748    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:43:04.966849    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.966924    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3625
	I0816 05:43:04.967937    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:04.967979    3700 fix.go:112] recreateIfNeeded on ha-073000: state=Stopped err=<nil>
	I0816 05:43:04.968006    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	W0816 05:43:04.968088    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:05.010683    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000" ...
	I0816 05:43:05.031624    3700 main.go:141] libmachine: (ha-073000) Calling .Start
	I0816 05:43:05.031872    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.031897    3700 main.go:141] libmachine: (ha-073000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid
	I0816 05:43:05.033643    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:05.033659    3700 main.go:141] libmachine: (ha-073000) DBG | pid 3625 is in state "Stopped"
	I0816 05:43:05.033683    3700 main.go:141] libmachine: (ha-073000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid...
	I0816 05:43:05.034080    3700 main.go:141] libmachine: (ha-073000) DBG | Using UUID 449fd9a3-1c71-4e9a-9271-363ec4bdb253
	I0816 05:43:05.149249    3700 main.go:141] libmachine: (ha-073000) DBG | Generated MAC 36:31:25:a5:a2:ed
	I0816 05:43:05.149291    3700 main.go:141] libmachine: (ha-073000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:05.149397    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149433    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149473    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "449fd9a3-1c71-4e9a-9271-363ec4bdb253", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:05.149540    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 449fd9a3-1c71-4e9a-9271-363ec4bdb253 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:05.149556    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:05.150961    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Pid is 3714
	I0816 05:43:05.151298    3700 main.go:141] libmachine: (ha-073000) DBG | Attempt 0
	I0816 05:43:05.151311    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.151435    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:43:05.153225    3700 main.go:141] libmachine: (ha-073000) DBG | Searching for 36:31:25:a5:a2:ed in /var/db/dhcpd_leases ...
	I0816 05:43:05.153302    3700 main.go:141] libmachine: (ha-073000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:05.153320    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:05.153335    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:05.153348    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:05.153395    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09a1c}
	I0816 05:43:05.153412    3700 main.go:141] libmachine: (ha-073000) DBG | Found match: 36:31:25:a5:a2:ed
	I0816 05:43:05.153421    3700 main.go:141] libmachine: (ha-073000) Calling .GetConfigRaw
	I0816 05:43:05.153453    3700 main.go:141] libmachine: (ha-073000) DBG | IP: 192.169.0.5
	I0816 05:43:05.154140    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:05.154367    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:05.154767    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:05.154779    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:05.154938    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:05.155074    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:05.155194    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155310    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155408    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:05.155550    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:05.155750    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:05.155759    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:05.159119    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:05.211364    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:05.212077    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.212095    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.212103    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.212109    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.591470    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:05.591483    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:05.706454    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.706476    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.706490    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.706501    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.707461    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:05.707472    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:11.286594    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:43:11.286691    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:43:11.286700    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:43:11.310519    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:43:40.225322    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:40.225337    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225482    3700 buildroot.go:166] provisioning hostname "ha-073000"
	I0816 05:43:40.225493    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225593    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.225692    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.225793    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225892    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225986    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.226106    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.226271    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.226298    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000 && echo "ha-073000" | sudo tee /etc/hostname
	I0816 05:43:40.294551    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000
	
	I0816 05:43:40.294568    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.294702    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.294805    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.294917    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.295018    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.295131    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.295293    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.295303    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:40.357437    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:43:40.357455    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:40.357467    3700 buildroot.go:174] setting up certificates
	I0816 05:43:40.357475    3700 provision.go:84] configureAuth start
	I0816 05:43:40.357482    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.357611    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:40.357710    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.357800    3700 provision.go:143] copyHostCerts
	I0816 05:43:40.357831    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.357900    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:40.357908    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.358056    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:40.358263    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358303    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:40.358308    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358383    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:40.358527    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358564    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:40.358575    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358655    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:40.358790    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000 san=[127.0.0.1 192.169.0.5 ha-073000 localhost minikube]
	I0816 05:43:40.668742    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:40.668797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:40.668812    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.669020    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.669115    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.669208    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.669298    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:40.705870    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:40.705942    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:40.727099    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:40.727157    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 05:43:40.747334    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:40.747393    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:43:40.766795    3700 provision.go:87] duration metric: took 409.312981ms to configureAuth
	I0816 05:43:40.766810    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:40.766972    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:40.766985    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:40.767112    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.767214    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.767307    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767377    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767456    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.767585    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.767712    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.767720    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:40.823994    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:40.824009    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:43:40.824077    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:40.824089    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.824227    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.824329    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824430    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824516    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.824679    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.824819    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.824862    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:40.894312    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:40.894335    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.894465    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.894566    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894651    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894725    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.894858    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.895012    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.895025    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:43:42.619681    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:43:42.619696    3700 machine.go:96] duration metric: took 37.465655472s to provisionDockerMachine
	I0816 05:43:42.619707    3700 start.go:293] postStartSetup for "ha-073000" (driver="hyperkit")
	I0816 05:43:42.619714    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:43:42.619724    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.619902    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:43:42.619926    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.620017    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.620114    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.620221    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.620305    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.656447    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:43:42.659759    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:43:42.659773    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:43:42.659872    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:43:42.660059    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:43:42.660065    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:43:42.660269    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:43:42.667667    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:42.687880    3700 start.go:296] duration metric: took 68.167584ms for postStartSetup
	I0816 05:43:42.687899    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.688070    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:43:42.688083    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.688171    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.688267    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.688367    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.688456    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.722698    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:43:42.722761    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:43:42.776641    3700 fix.go:56] duration metric: took 37.82132494s for fixHost
	I0816 05:43:42.776663    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.776810    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.776931    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777033    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777125    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.777253    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:42.777390    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:42.777397    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:43:42.836399    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812222.925313054
	
	I0816 05:43:42.836411    3700 fix.go:216] guest clock: 1723812222.925313054
	I0816 05:43:42.836417    3700 fix.go:229] Guest: 2024-08-16 05:43:42.925313054 -0700 PDT Remote: 2024-08-16 05:43:42.776654 -0700 PDT m=+38.247448415 (delta=148.659054ms)
	I0816 05:43:42.836434    3700 fix.go:200] guest clock delta is within tolerance: 148.659054ms
	I0816 05:43:42.836437    3700 start.go:83] releasing machines lock for "ha-073000", held for 37.881174383s
	I0816 05:43:42.836457    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.836598    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:42.836699    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837049    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837160    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837249    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:43:42.837284    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837297    3700 ssh_runner.go:195] Run: cat /version.json
	I0816 05:43:42.837308    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837399    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837413    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837511    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837521    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837609    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837623    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837690    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.837711    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.913686    3700 ssh_runner.go:195] Run: systemctl --version
	I0816 05:43:42.918889    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:43:42.923312    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:43:42.923351    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:43:42.935697    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:43:42.935707    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:42.935801    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:42.953681    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:43:42.962535    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:43:42.971266    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:43:42.971307    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:43:42.979934    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:42.988664    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:43:42.997290    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:43.005918    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:43:43.014721    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:43:43.023404    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:43:43.032084    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:43:43.040766    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:43:43.048727    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:43:43.056628    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.160133    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:43:43.175551    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:43.175624    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:43:43.187204    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.198626    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:43:43.214407    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.226374    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.237460    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:43:43.257683    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.271060    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:43.289045    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:43:43.291949    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:43:43.299258    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:43:43.312470    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:43:43.422601    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:43:43.528683    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:43:43.528764    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:43:43.542650    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.653228    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:43:46.028721    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.375520385s)
	I0816 05:43:46.028781    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:43:46.040150    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.049993    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:43:46.143000    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:43:46.256755    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.354748    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:43:46.369090    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.380481    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.481851    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:43:46.546753    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:43:46.546835    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:43:46.551170    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:43:46.551219    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:43:46.554224    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:43:46.581136    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:43:46.581204    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.600242    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.641436    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:43:46.641483    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:46.641865    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:43:46.646502    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:43:46.656383    3700 kubeadm.go:883] updating cluster {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 05:43:46.656461    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:46.656510    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.670426    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.670438    3700 docker.go:615] Images already preloaded, skipping extraction
	I0816 05:43:46.670515    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.682547    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.682568    3700 cache_images.go:84] Images are preloaded, skipping loading
	I0816 05:43:46.682577    3700 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 05:43:46.682650    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:43:46.682717    3700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:43:46.717612    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:46.717631    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:46.717641    3700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:43:46.717661    3700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-073000 NodeName:ha-073000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:43:46.717752    3700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-073000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 05:43:46.717766    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:43:46.717818    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:43:46.732805    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:43:46.732879    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 05:43:46.732932    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:43:46.744741    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:43:46.744797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 05:43:46.752198    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 05:43:46.766525    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:43:46.779788    3700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 05:43:46.793230    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:43:46.806345    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:43:46.809072    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:43:46.818297    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.921223    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:43:46.935952    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.5
	I0816 05:43:46.935964    3700 certs.go:194] generating shared ca certs ...
	I0816 05:43:46.935976    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.936150    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:43:46.936228    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:43:46.936237    3700 certs.go:256] generating profile certs ...
	I0816 05:43:46.936323    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:43:46.936347    3700 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e
	I0816 05:43:46.936361    3700 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 05:43:46.977158    3700 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e ...
	I0816 05:43:46.977174    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e: {Name:mk8d6f44d0e237393798a574888fbd7c16b75ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977520    3700 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e ...
	I0816 05:43:46.977530    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e: {Name:mk0b98c1e535c8fd1781c44e6f22509b6b916e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977744    3700 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt
	I0816 05:43:46.977955    3700 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key
	I0816 05:43:46.978212    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:43:46.978221    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:43:46.978248    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:43:46.978268    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:43:46.978286    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:43:46.978305    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:43:46.978327    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:43:46.978345    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:43:46.978363    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:43:46.978461    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:43:46.978507    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:43:46.978516    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:43:46.978550    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:43:46.978580    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:43:46.978610    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:43:46.978674    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:46.978708    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:46.978729    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:43:46.978748    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:43:46.979212    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:43:47.006926    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:43:47.033030    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:43:47.064204    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:43:47.096328    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:43:47.140607    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:43:47.183767    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:43:47.225875    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:43:47.272651    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:43:47.321871    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:43:47.361863    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:43:47.392530    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:43:47.413203    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:43:47.419281    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:43:47.429288    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437638    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437698    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.445809    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:43:47.456922    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:43:47.468355    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473399    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473439    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.477636    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:43:47.487065    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:43:47.496174    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499485    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499517    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.503664    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:43:47.512642    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:43:47.516083    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:43:47.520349    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:43:47.524473    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:43:47.528697    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:43:47.532807    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:43:47.536987    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 05:43:47.541120    3700 kubeadm.go:392] StartCluster: {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:47.541240    3700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:43:47.554428    3700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:43:47.562930    3700 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:43:47.562950    3700 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:43:47.563002    3700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:43:47.571138    3700 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:43:47.571458    3700 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-073000" does not appear in /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.571540    3700 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1009/kubeconfig needs updating (will repair): [kubeconfig missing "ha-073000" cluster setting kubeconfig missing "ha-073000" context setting]
	I0816 05:43:47.571730    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.572561    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.572756    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:43:47.573052    3700 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 05:43:47.573236    3700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:43:47.581202    3700 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 05:43:47.581215    3700 kubeadm.go:597] duration metric: took 18.259849ms to restartPrimaryControlPlane
	I0816 05:43:47.581220    3700 kubeadm.go:394] duration metric: took 40.104743ms to StartCluster
	I0816 05:43:47.581228    3700 settings.go:142] acquiring lock: {Name:mkb3c8aac25c21025142737c3a236d96f65e9fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581298    3700 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.581626    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581845    3700 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:47.581857    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:43:47.581865    3700 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:43:47.581987    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.603872    3700 out.go:177] * Enabled addons: 
	I0816 05:43:47.645376    3700 addons.go:510] duration metric: took 63.484341ms for enable addons: enabled=[]
	I0816 05:43:47.645417    3700 start.go:246] waiting for cluster config update ...
	I0816 05:43:47.645429    3700 start.go:255] writing updated cluster config ...
	I0816 05:43:47.667512    3700 out.go:201] 
	I0816 05:43:47.689977    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.690106    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.712362    3700 out.go:177] * Starting "ha-073000-m02" control-plane node in "ha-073000" cluster
	I0816 05:43:47.754492    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:47.754528    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:47.754704    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:47.754723    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:47.754841    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.755736    3700 start.go:360] acquireMachinesLock for ha-073000-m02: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:47.755840    3700 start.go:364] duration metric: took 80.235µs to acquireMachinesLock for "ha-073000-m02"
	I0816 05:43:47.755868    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:47.755877    3700 fix.go:54] fixHost starting: m02
	I0816 05:43:47.756330    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:47.756357    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:47.765501    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0816 05:43:47.765944    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:47.766357    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:47.766399    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:47.766686    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:47.766840    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.766960    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:43:47.767043    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.767163    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3630
	I0816 05:43:47.768076    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.768103    3700 fix.go:112] recreateIfNeeded on ha-073000-m02: state=Stopped err=<nil>
	I0816 05:43:47.768113    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	W0816 05:43:47.768243    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:47.789281    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m02" ...
	I0816 05:43:47.810495    3700 main.go:141] libmachine: (ha-073000-m02) Calling .Start
	I0816 05:43:47.810746    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.810809    3700 main.go:141] libmachine: (ha-073000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid
	I0816 05:43:47.812579    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.812591    3700 main.go:141] libmachine: (ha-073000-m02) DBG | pid 3630 is in state "Stopped"
	I0816 05:43:47.812606    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid...
	I0816 05:43:47.812915    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Using UUID 2ecbd3fa-135d-470f-9281-b78e2fd82941
	I0816 05:43:47.840853    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Generated MAC 3a:16:de:25:18:f9
	I0816 05:43:47.840881    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:47.841024    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841093    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841131    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2ecbd3fa-135d-470f-9281-b78e2fd82941", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:47.841173    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2ecbd3fa-135d-470f-9281-b78e2fd82941 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:47.841196    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:47.842666    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Pid is 3719
	I0816 05:43:47.842981    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Attempt 0
	I0816 05:43:47.843001    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.843149    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3719
	I0816 05:43:47.845190    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Searching for 3a:16:de:25:18:f9 in /var/db/dhcpd_leases ...
	I0816 05:43:47.845245    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:47.845265    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:43:47.845294    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:47.845311    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:47.845326    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:47.845337    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found match: 3a:16:de:25:18:f9
	I0816 05:43:47.845356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetConfigRaw
	I0816 05:43:47.845364    3700 main.go:141] libmachine: (ha-073000-m02) DBG | IP: 192.169.0.6
	I0816 05:43:47.846051    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:47.846244    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.846807    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:47.846817    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.846948    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:47.847069    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:47.847170    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847430    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:47.847576    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:47.847744    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:47.847752    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:47.850417    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:47.859543    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:47.860408    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:47.860422    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:47.860431    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:47.860467    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.243712    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:48.243733    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:48.358576    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:48.358605    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:48.358619    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:48.358631    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.359399    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:48.359409    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:53.958154    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 05:43:53.958240    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 05:43:53.958254    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 05:43:53.983312    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 05:43:58.907700    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:58.907714    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907845    3700 buildroot.go:166] provisioning hostname "ha-073000-m02"
	I0816 05:43:58.907881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907973    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.908072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.908173    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.908483    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.908630    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.908640    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m02 && echo "ha-073000-m02" | sudo tee /etc/hostname
	I0816 05:43:58.968566    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m02
	
	I0816 05:43:58.968580    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.968718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.968818    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.968913    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.969011    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.969127    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.969267    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.969280    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:59.024122    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
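	The `/etc/hosts` edit minikube ran over SSH above is an idempotent update: rewrite an existing `127.0.1.1` alias if one is present, otherwise append one. A minimal standalone sketch of the same pattern, run against a scratch copy of the hosts file so it needs no root (the file path and contents are illustrative, not taken from the guest):

```shell
#!/bin/sh
# Work on a scratch copy instead of the real /etc/hosts (assumption for the sketch).
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

NAME=ha-073000-m02
# Only touch the file if the hostname is not already mapped.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An alias line exists: rewrite it in place (-i.bak works on GNU and BSD sed).
        sed -i.bak "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No alias yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # prints: 127.0.1.1 ha-073000-m02
```

Running the script a second time is a no-op, which is the property the original snippet relies on across repeated provisioning passes.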
	I0816 05:43:59.024136    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:59.024144    3700 buildroot.go:174] setting up certificates
	I0816 05:43:59.024150    3700 provision.go:84] configureAuth start
	I0816 05:43:59.024156    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:59.024280    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:59.024383    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.024473    3700 provision.go:143] copyHostCerts
	I0816 05:43:59.024501    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024550    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:59.024556    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024690    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:59.024885    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.024915    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:59.024920    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.025027    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:59.025190    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025223    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:59.025228    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025295    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:59.025446    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m02 san=[127.0.0.1 192.169.0.6 ha-073000-m02 localhost minikube]
	I0816 05:43:59.071749    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:59.071798    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:59.071819    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.071951    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.072035    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.072105    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.072191    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:43:59.104582    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:59.104649    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:59.123906    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:59.123983    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:43:59.142982    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:59.143045    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:43:59.162066    3700 provision.go:87] duration metric: took 137.911741ms to configureAuth
	I0816 05:43:59.162078    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:59.162258    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:59.162271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:59.162402    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.162489    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.162572    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162650    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162733    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.162851    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.162983    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.162993    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:59.210853    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:59.210865    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:43:59.210945    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:59.210957    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.211118    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.211219    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211310    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211387    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.211514    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.211649    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.211694    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:59.271471    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:59.271487    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.271647    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.271739    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271846    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271935    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.272053    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.272197    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.272208    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:00.927291    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:00.927305    3700 machine.go:96] duration metric: took 13.080748192s to provisionDockerMachine
	I0816 05:44:00.927312    3700 start.go:293] postStartSetup for "ha-073000-m02" (driver="hyperkit")
	I0816 05:44:00.927320    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:00.927330    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:00.927511    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:00.927525    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:00.927652    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:00.927731    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:00.927829    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:00.927905    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:00.960594    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:00.964512    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:00.964524    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:00.964627    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:00.964771    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:00.964778    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:00.964934    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:00.975551    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:01.005513    3700 start.go:296] duration metric: took 78.192885ms for postStartSetup
	I0816 05:44:01.005559    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.005745    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:01.005758    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.005896    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.005983    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.006072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.006164    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.040756    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:01.040818    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:01.075264    3700 fix.go:56] duration metric: took 13.319647044s for fixHost
	I0816 05:44:01.075289    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.075435    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.075528    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075613    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.075847    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:01.075998    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:44:01.076006    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:01.125972    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812240.969906147
	
	I0816 05:44:01.125983    3700 fix.go:216] guest clock: 1723812240.969906147
	I0816 05:44:01.125988    3700 fix.go:229] Guest: 2024-08-16 05:44:00.969906147 -0700 PDT Remote: 2024-08-16 05:44:01.075279 -0700 PDT m=+56.546434198 (delta=-105.372853ms)
	I0816 05:44:01.125998    3700 fix.go:200] guest clock delta is within tolerance: -105.372853ms
	I0816 05:44:01.126002    3700 start.go:83] releasing machines lock for "ha-073000-m02", held for 13.370412469s
	I0816 05:44:01.126019    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.126142    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:01.147724    3700 out.go:177] * Found network options:
	I0816 05:44:01.167556    3700 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 05:44:01.188682    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.188720    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189649    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189985    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:01.190020    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	W0816 05:44:01.190130    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.190184    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190263    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:01.190286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.190352    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190515    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.190522    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190713    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.190722    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190891    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.191002    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	W0816 05:44:01.219673    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:01.219729    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:01.266010    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:01.266030    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.266137    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.282065    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:01.291072    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:01.299924    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.299972    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:01.308888    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.317715    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:01.326478    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.335362    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:01.344565    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:01.353443    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:01.362391    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:01.371153    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:01.379211    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:01.387397    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.485288    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:44:01.504163    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.504230    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:01.519289    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.533468    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:01.549919    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.560311    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.570439    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:01.589516    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.599936    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.614849    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:01.617987    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:01.625242    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:01.638690    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:01.731621    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:01.840350    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.840371    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:01.854317    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.960384    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:44:04.269941    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.309582879s)
	I0816 05:44:04.270007    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:44:04.280320    3700 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:44:04.292872    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.303371    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:44:04.393390    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:44:04.502895    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.604917    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:44:04.618462    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.629172    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.732241    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:44:04.796052    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:44:04.796135    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:44:04.800527    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:44:04.800578    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:44:04.803568    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:44:04.832000    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:44:04.832069    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.850869    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.890177    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:44:04.933118    3700 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 05:44:04.954934    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:04.955381    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:44:04.959881    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:44:04.969321    3700 mustload.go:65] Loading cluster: ha-073000
	I0816 05:44:04.969488    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:04.969741    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.969756    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.978313    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52098
	I0816 05:44:04.978649    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.979005    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.979022    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.979231    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.979362    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:44:04.979460    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:04.979527    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:44:04.980457    3700 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:44:04.980703    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.980719    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.989380    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52100
	I0816 05:44:04.989872    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.990229    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.990239    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.990441    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.990567    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:44:04.990667    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.6
	I0816 05:44:04.990673    3700 certs.go:194] generating shared ca certs ...
	I0816 05:44:04.990681    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:44:04.990819    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:44:04.990876    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:44:04.990885    3700 certs.go:256] generating profile certs ...
	I0816 05:44:04.990968    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:44:04.991052    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.852e3a00
	I0816 05:44:04.991104    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:44:04.991115    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:44:04.991137    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:44:04.991158    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:44:04.991181    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:44:04.991203    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:44:04.991224    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:44:04.991243    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:44:04.991260    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:44:04.991336    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:44:04.991373    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:44:04.991382    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:44:04.991415    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:44:04.991446    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:44:04.991475    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:44:04.991545    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:04.991577    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:44:04.991598    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:04.991616    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:44:04.991641    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:44:04.991732    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:44:04.991816    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:44:04.991895    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:44:04.991976    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:44:05.018674    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 05:44:05.021887    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 05:44:05.030501    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 05:44:05.033440    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 05:44:05.041955    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 05:44:05.044846    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 05:44:05.053721    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 05:44:05.056775    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 05:44:05.065337    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 05:44:05.068254    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 05:44:05.076761    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 05:44:05.079704    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 05:44:05.088144    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:44:05.108529    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:44:05.128319    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:44:05.148205    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:44:05.168044    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:44:05.187959    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:44:05.207850    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:44:05.227864    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:44:05.247806    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:44:05.267586    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:44:05.287321    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:44:05.307517    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 05:44:05.321001    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 05:44:05.334635    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 05:44:05.348115    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 05:44:05.361521    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 05:44:05.375128    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 05:44:05.388391    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 05:44:05.402014    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:44:05.406108    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:44:05.414347    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417650    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417685    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.421754    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:44:05.429962    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:44:05.438138    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441411    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441444    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.445615    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:44:05.453740    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:44:05.462021    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465413    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465453    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.469602    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:44:05.477722    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:44:05.481045    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:44:05.485278    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:44:05.489478    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:44:05.493769    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:44:05.497993    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:44:05.502305    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 05:44:05.506534    3700 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 05:44:05.506585    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:44:05.506599    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:44:05.506631    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:44:05.518840    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:44:05.518872    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 05:44:05.518938    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:44:05.527488    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:44:05.527548    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 05:44:05.535755    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 05:44:05.549218    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:44:05.562474    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:44:05.575901    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:44:05.578825    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:44:05.588727    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.694671    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.710202    3700 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:44:05.710412    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:05.731897    3700 out.go:177] * Verifying Kubernetes components...
	I0816 05:44:05.773259    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.888127    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.905019    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:44:05.905207    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 05:44:05.905240    3700 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 05:44:05.905409    3700 node_ready.go:35] waiting up to 6m0s for node "ha-073000-m02" to be "Ready" ...
	I0816 05:44:05.905490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:05.905495    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:05.905503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:05.905507    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.204653    3700 round_trippers.go:574] Response Status: 200 OK in 8299 milliseconds
	I0816 05:44:14.205215    3700 node_ready.go:49] node "ha-073000-m02" has status "Ready":"True"
	I0816 05:44:14.205228    3700 node_ready.go:38] duration metric: took 8.299966036s for node "ha-073000-m02" to be "Ready" ...
	I0816 05:44:14.205235    3700 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:14.205277    3700 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:44:14.205286    3700 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:44:14.205323    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:14.205327    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.205333    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.205336    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.223247    3700 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0816 05:44:14.231122    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.231187    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2fdpw
	I0816 05:44:14.231192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.231198    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.231208    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.240205    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:14.240681    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.240689    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.240695    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.240699    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.247571    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:14.247971    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.247981    3700 pod_ready.go:82] duration metric: took 16.842454ms for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.247988    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.248023    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vf22s
	I0816 05:44:14.248028    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.248034    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.248038    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252093    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.252500    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.252508    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.252513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.255102    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.255471    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.255482    3700 pod_ready.go:82] duration metric: took 7.488195ms for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255489    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255538    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000
	I0816 05:44:14.255543    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.255549    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.255554    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257423    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.257786    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.257793    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.257798    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257802    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:14.261582    3700 pod_ready.go:93] pod "etcd-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.261592    3700 pod_ready.go:82] duration metric: took 6.098581ms for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261599    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261644    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m02
	I0816 05:44:14.261649    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.261654    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261658    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.264072    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.264627    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:14.264635    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.264640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.264645    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.267306    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.267636    3700 pod_ready.go:93] pod "etcd-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.267645    3700 pod_ready.go:82] duration metric: took 6.041319ms for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267652    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267706    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m03
	I0816 05:44:14.267711    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.267716    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.267726    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.269558    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.406286    3700 request.go:632] Waited for 136.053726ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406320    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406325    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.406330    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.406334    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.412790    3700 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0816 05:44:14.412989    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413000    3700 pod_ready.go:82] duration metric: took 145.343663ms for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:14.413019    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413037    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.606275    3700 request.go:632] Waited for 193.204942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606325    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606330    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.606342    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.606346    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.611263    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.806263    3700 request.go:632] Waited for 194.483786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806300    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806306    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.806312    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.806316    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.808457    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.809016    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.809026    3700 pod_ready.go:82] duration metric: took 395.988936ms for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.809033    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:15.005594    3700 request.go:632] Waited for 196.505275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005624    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005630    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.005637    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.005640    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.010212    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.206584    3700 request.go:632] Waited for 195.946236ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206645    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206685    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.206691    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.206695    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.211350    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.405410    3700 request.go:632] Waited for 95.393387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405469    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405474    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.405479    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.405483    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.408080    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.605592    3700 request.go:632] Waited for 196.04685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605628    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605634    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.605640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.605644    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.607860    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.810998    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.811014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.811021    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.811029    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.813293    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:16.005626    3700 request.go:632] Waited for 191.969847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005743    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005754    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.005765    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.005773    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.008807    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.309801    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.309825    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.309836    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.309844    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.313121    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.407323    3700 request.go:632] Waited for 93.416086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407387    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407397    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.407409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.407424    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.410882    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.810461    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.810486    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.810498    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.810504    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.813546    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.814282    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.814289    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.814295    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.814298    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.816149    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:16.816456    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:17.309900    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.309921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.309932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.309937    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.312735    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:17.313209    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.313218    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.313223    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.313233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.314796    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:17.809685    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.809718    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.809758    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.809767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.813579    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:17.814147    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.814157    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.814165    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.814169    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.815986    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.309824    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.309839    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.309845    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.309850    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.312500    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:18.312950    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.312958    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.312964    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.312968    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.317556    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.811340    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.811362    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.811380    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.811389    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.815578    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.816331    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.816338    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.816343    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.816347    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.818287    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.818637    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:19.309154    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.309213    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.309226    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.309244    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.313107    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.313580    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.313589    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.313597    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.313601    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.315208    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:19.810298    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.810320    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.810332    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.810338    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.813934    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.814561    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.814571    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.814579    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.814589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.816289    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.309290    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.309312    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.309322    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.309328    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.313244    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.313715    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.313724    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.313731    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.313737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.315554    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.809680    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.809710    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.809723    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.809735    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.813009    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.813665    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.813674    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.813682    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.813686    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.815508    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.309619    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.309640    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.309667    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.309675    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.313585    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.314167    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.314174    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.314179    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.314182    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.315676    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.316053    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:21.809228    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.809250    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.809261    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.809267    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.812952    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.813489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.813500    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.813508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.813512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.815094    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.310261    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.310287    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.310299    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.310305    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.314627    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:22.314992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.314999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.315005    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.315008    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.316747    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.810493    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.810515    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.810526    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.810532    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814082    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:22.814652    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.814660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.814666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814670    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.816180    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.310190    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.310217    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.310228    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.310235    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.314496    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:23.314922    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.314929    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.314935    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.314939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.316481    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.316841    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:23.809175    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.809187    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.809202    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.809207    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.811160    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.811560    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.811568    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.811574    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.811578    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.814714    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.309762    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:24.309784    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.309796    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.309802    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.313492    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.314086    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.314097    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.314106    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.314111    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.315684    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.316026    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:24.316036    3700 pod_ready.go:82] duration metric: took 9.507184684s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316045    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316078    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m03
	I0816 05:44:24.316086    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.316091    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.316095    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.317489    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.317864    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:24.317872    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.317877    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.317881    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.319230    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:24.319275    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319288    3700 pod_ready.go:82] duration metric: took 3.236554ms for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.319295    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319299    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.319330    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000
	I0816 05:44:24.319335    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.319340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.319344    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.320953    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.321429    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:24.321437    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.321442    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.321446    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.322965    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.323320    3700 pod_ready.go:98] node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323329    3700 pod_ready.go:82] duration metric: took 4.023708ms for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.323334    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323339    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.323367    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.323371    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.323379    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.323384    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.324781    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.325216    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.325223    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.325229    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.325233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.326748    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.824459    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.824484    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.824494    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.824506    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828252    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.828701    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.828712    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.828719    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828723    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.830277    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.323827    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.323852    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.323864    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.323877    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327155    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:25.327737    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.327744    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.327750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327754    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.329624    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.824109    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.824127    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.824136    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.824142    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.826476    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:25.827100    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.827108    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.827113    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.827117    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.828738    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.323567    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.323611    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.323617    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.323621    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.325453    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.325886    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.325894    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.325900    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.325904    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.327286    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.327610    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:26.823816    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.823841    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.823852    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.823860    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.827261    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:26.827821    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.827831    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.827839    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.827844    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.829686    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.323970    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.323996    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.324008    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.324015    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.327573    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.328047    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.328056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.328063    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.328067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.329875    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.823992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.824023    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.824082    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.824091    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.827309    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.827980    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.827987    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.827993    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.827998    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.829445    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:28.324903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.324920    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.324929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.324933    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.327085    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.327489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.327497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.327503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.327506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.329732    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.330047    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:28.823366    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.823382    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.823401    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.823422    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.846246    3700 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0816 05:44:28.846781    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.846789    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.846795    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.846803    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.855350    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:29.324024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.324057    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.324064    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.324067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.326984    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.327546    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.327553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.327559    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.327563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.330445    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.824279    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.824299    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.824306    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.824310    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.827888    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:29.828505    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.828512    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.828518    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.828522    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.830193    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.323608    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.323627    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.323635    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.323639    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327262    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:30.327789    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.327798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.327803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327807    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.329683    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.330034    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:30.823965    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.823999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.824072    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.824083    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.828534    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:30.829026    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.829034    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.829040    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.829044    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.830921    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.324089    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:31.324113    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.324130    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.324137    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.328896    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:31.329571    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:31.329579    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.329585    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.329589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.331878    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:31.332446    3700 pod_ready.go:93] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:31.332455    3700 pod_ready.go:82] duration metric: took 7.009249215s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332462    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332502    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m03
	I0816 05:44:31.332507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.332512    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.332516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.334084    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.334465    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:31.334472    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.334477    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.334480    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.335893    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:31.335965    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335979    3700 pod_ready.go:82] duration metric: took 3.51153ms for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:31.335986    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335991    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.336024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.336029    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.336035    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.336038    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.337516    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.338235    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.338242    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.338248    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.338254    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.339975    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.837844    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.837869    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.837881    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.837927    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841316    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:31.841903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.841910    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.841916    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841919    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.843493    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.336771    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.336798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.336809    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.336816    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.340935    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:32.341412    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.341420    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.341426    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.341429    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.342957    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.838157    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.838212    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.838225    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.838232    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.841711    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:32.842249    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.842259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.842267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.842272    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.843815    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.337329    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.337354    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.337366    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.337372    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.341232    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.341870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.341877    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.341883    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.341887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.343419    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.343689    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:33.836128    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.836154    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.836164    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.836170    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.840006    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.840641    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.840650    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.840658    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.840663    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.842504    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.337618    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.337683    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.337693    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.337698    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.339996    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.340499    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.340507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.340513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.340517    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.342040    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.836185    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.836258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.836268    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.836274    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.838913    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.839391    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.839398    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.839404    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.839409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.840888    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:35.336164    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.336192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.336242    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.336256    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.344590    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:35.345076    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.345083    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.345089    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.345106    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.351725    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:35.352079    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:35.838186    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.838207    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.838219    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.838225    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.841779    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:35.842361    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.842368    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.842373    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.842376    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.844076    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.336349    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.336372    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.336387    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.336393    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.339759    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.340248    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.340258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.340267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.340273    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.341840    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.836286    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.836309    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.836320    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.836326    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.839632    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.840490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.840497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.840503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.840506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.842131    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.337695    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.337717    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.337729    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.337736    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.341389    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.341954    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.341964    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.341972    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.341977    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.343432    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.837030    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.837056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.837073    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.837092    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.840202    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.840916    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.840924    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.840929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.840934    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.842593    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.843036    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:38.336396    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.336421    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.336432    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.336441    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.340051    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.340807    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.340818    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.340826    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.340831    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.342328    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:38.836968    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.836993    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.837004    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.837009    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.840369    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.840942    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.840953    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.840961    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.840966    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.842959    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.337347    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.337374    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.337385    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.337391    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.340872    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.341545    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.341553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.341560    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.341563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.343528    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.836514    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.836585    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.836604    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.836610    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.839854    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.840266    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.840275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.840282    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.840287    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.841976    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.337117    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.337140    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.337151    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.337157    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.340623    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:40.341081    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.341089    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.341095    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.341099    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.342480    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.342868    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:40.836255    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.836275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.836287    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.836294    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.839119    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:40.839650    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.839660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.839666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.839671    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.841284    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.336308    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.336328    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.336340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.336356    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.339424    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.339982    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.339990    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.339995    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.339999    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.341644    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.837468    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.837489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.837501    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.837508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.841276    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.842038    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.842045    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.842051    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.842055    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.843559    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.336716    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.336731    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.336737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.336740    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.342330    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:42.343356    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.343364    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.343370    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.343373    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.351635    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:42.352611    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:42.836673    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.836700    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.836711    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.836719    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.840138    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:42.840742    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.840753    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.840762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.840767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.842386    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.842727    3700 pod_ready.go:93] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.842736    3700 pod_ready.go:82] duration metric: took 11.506966083s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842743    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842773    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27jt
	I0816 05:44:42.842778    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.842783    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.842788    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.844352    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.844828    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:42.844835    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.844841    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.844845    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.846280    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.846557    3700 pod_ready.go:93] pod "kube-proxy-c27jt" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.846565    3700 pod_ready.go:82] duration metric: took 3.817397ms for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846572    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846601    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr2c8
	I0816 05:44:42.846605    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.846612    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.846615    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.848062    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.848495    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:42.848503    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.848509    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.848512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.849798    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:42.849858    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849868    3700 pod_ready.go:82] duration metric: took 3.291408ms for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:42.849874    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849879    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.849912    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wcgdv
	I0816 05:44:42.849917    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.849922    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.849925    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.851357    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.851732    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m04
	I0816 05:44:42.851740    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.851745    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.851750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.853123    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.853436    3700 pod_ready.go:93] pod "kube-proxy-wcgdv" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.853444    3700 pod_ready.go:82] duration metric: took 3.55866ms for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853450    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853478    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000
	I0816 05:44:42.853482    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.853488    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.853492    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.854845    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.855143    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.855150    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.855155    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.855160    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.856490    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.856772    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.856781    3700 pod_ready.go:82] duration metric: took 3.32627ms for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.856793    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.037823    3700 request.go:632] Waited for 180.948071ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037884    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037896    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.037908    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.037918    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.041274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.238753    3700 request.go:632] Waited for 196.999605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238909    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.238932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.238939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.242465    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.242850    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:43.242864    3700 pod_ready.go:82] duration metric: took 386.071689ms for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.242873    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.436910    3700 request.go:632] Waited for 193.992761ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437002    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.437025    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.437033    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.439940    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:43.637222    3700 request.go:632] Waited for 196.770029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637254    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.637265    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.637270    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.638883    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:43.638942    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638952    3700 pod_ready.go:82] duration metric: took 396.081081ms for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:43.638959    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638964    3700 pod_ready.go:39] duration metric: took 29.434296561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:43.638986    3700 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:44:43.639045    3700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:44:43.650685    3700 api_server.go:72] duration metric: took 37.941199778s to wait for apiserver process to appear ...
	I0816 05:44:43.650696    3700 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:44:43.650717    3700 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 05:44:43.653719    3700 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 05:44:43.653750    3700 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 05:44:43.653755    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.653762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.653766    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.654323    3700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 05:44:43.654424    3700 api_server.go:141] control plane version: v1.31.0
	I0816 05:44:43.654434    3700 api_server.go:131] duration metric: took 3.733932ms to wait for apiserver health ...
	I0816 05:44:43.654442    3700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 05:44:43.838724    3700 request.go:632] Waited for 184.226134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838846    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838861    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.838873    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.838887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.845134    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:43.850534    3700 system_pods.go:59] 26 kube-system pods found
	I0816 05:44:43.850556    3700 system_pods.go:61] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850563    3700 system_pods.go:61] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850568    3700 system_pods.go:61] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:43.850572    3700 system_pods.go:61] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:43.850575    3700 system_pods.go:61] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:43.850577    3700 system_pods.go:61] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:43.850582    3700 system_pods.go:61] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:43.850586    3700 system_pods.go:61] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:43.850589    3700 system_pods.go:61] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:43.850592    3700 system_pods.go:61] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:43.850594    3700 system_pods.go:61] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:43.850597    3700 system_pods.go:61] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:43.850600    3700 system_pods.go:61] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:43.850603    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:43.850605    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:43.850608    3700 system_pods.go:61] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:43.850611    3700 system_pods.go:61] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:43.850613    3700 system_pods.go:61] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:43.850616    3700 system_pods.go:61] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:43.850618    3700 system_pods.go:61] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:43.850623    3700 system_pods.go:61] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:43.850627    3700 system_pods.go:61] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:43.850629    3700 system_pods.go:61] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:43.850632    3700 system_pods.go:61] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:43.850635    3700 system_pods.go:61] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:43.850637    3700 system_pods.go:61] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:43.850641    3700 system_pods.go:74] duration metric: took 196.198757ms to wait for pod list to return data ...
	I0816 05:44:43.850647    3700 default_sa.go:34] waiting for default service account to be created ...
	I0816 05:44:44.037355    3700 request.go:632] Waited for 186.643021ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037480    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.037499    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.037514    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.040844    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.041066    3700 default_sa.go:45] found service account: "default"
	I0816 05:44:44.041079    3700 default_sa.go:55] duration metric: took 190.431399ms for default service account to be created ...
	I0816 05:44:44.041086    3700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 05:44:44.237668    3700 request.go:632] Waited for 196.520766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237780    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237791    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.237803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.237812    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.243185    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:44.248662    3700 system_pods.go:86] 26 kube-system pods found
	I0816 05:44:44.248675    3700 system_pods.go:89] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248680    3700 system_pods.go:89] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248688    3700 system_pods.go:89] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:44.248691    3700 system_pods.go:89] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:44.248694    3700 system_pods.go:89] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:44.248697    3700 system_pods.go:89] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:44.248701    3700 system_pods.go:89] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:44.248705    3700 system_pods.go:89] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:44.248708    3700 system_pods.go:89] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:44.248711    3700 system_pods.go:89] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:44.248714    3700 system_pods.go:89] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:44.248717    3700 system_pods.go:89] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:44.248720    3700 system_pods.go:89] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:44.248723    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:44.248726    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:44.248728    3700 system_pods.go:89] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:44.248731    3700 system_pods.go:89] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:44.248734    3700 system_pods.go:89] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:44.248738    3700 system_pods.go:89] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:44.248742    3700 system_pods.go:89] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:44.248745    3700 system_pods.go:89] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:44.248748    3700 system_pods.go:89] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:44.248751    3700 system_pods.go:89] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:44.248756    3700 system_pods.go:89] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:44.248759    3700 system_pods.go:89] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:44.248761    3700 system_pods.go:89] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:44.248766    3700 system_pods.go:126] duration metric: took 207.679371ms to wait for k8s-apps to be running ...
	I0816 05:44:44.248773    3700 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 05:44:44.248823    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:44:44.259672    3700 system_svc.go:56] duration metric: took 10.896688ms WaitForService to wait for kubelet
	I0816 05:44:44.259685    3700 kubeadm.go:582] duration metric: took 38.550213651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:44:44.259697    3700 node_conditions.go:102] verifying NodePressure condition ...
	I0816 05:44:44.438728    3700 request.go:632] Waited for 178.976716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438882    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.438928    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.438938    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.442848    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.443702    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443718    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443727    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443730    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443734    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443737    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443741    3700 node_conditions.go:105] duration metric: took 184.043638ms to run NodePressure ...
	I0816 05:44:44.443749    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:44:44.443767    3700 start.go:255] writing updated cluster config ...
	I0816 05:44:44.469062    3700 out.go:201] 
	I0816 05:44:44.489551    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:44.489670    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.512183    3700 out.go:177] * Starting "ha-073000-m04" worker node in "ha-073000" cluster
	I0816 05:44:44.554442    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:44:44.554478    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:44:44.554690    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:44:44.554709    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:44:44.554824    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.555855    3700 start.go:360] acquireMachinesLock for ha-073000-m04: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:44.555978    3700 start.go:364] duration metric: took 98.145µs to acquireMachinesLock for "ha-073000-m04"
	I0816 05:44:44.556004    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:44.556011    3700 fix.go:54] fixHost starting: m04
	I0816 05:44:44.556446    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:44.556472    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:44.565770    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52105
	I0816 05:44:44.566121    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:44.566496    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:44.566517    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:44.566729    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:44.566845    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.566927    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:44:44.567001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.567096    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3643
	I0816 05:44:44.568001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.568039    3700 fix.go:112] recreateIfNeeded on ha-073000-m04: state=Stopped err=<nil>
	I0816 05:44:44.568049    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	W0816 05:44:44.568121    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:44.606366    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m04" ...
	I0816 05:44:44.663139    3700 main.go:141] libmachine: (ha-073000-m04) Calling .Start
	I0816 05:44:44.663315    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.663399    3700 main.go:141] libmachine: (ha-073000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid
	I0816 05:44:44.664399    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.664426    3700 main.go:141] libmachine: (ha-073000-m04) DBG | pid 3643 is in state "Stopped"
	I0816 05:44:44.664490    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid...
	I0816 05:44:44.664647    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Using UUID f2db23bc-c2a0-4ea2-9158-e93c928b5416
	I0816 05:44:44.689456    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Generated MAC f2:da:75:16:53:b7
	I0816 05:44:44.689481    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:44:44.689607    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689641    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689691    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f2db23bc-c2a0-4ea2-9158-e93c928b5416", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:44:44.689730    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f2db23bc-c2a0-4ea2-9158-e93c928b5416 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:44:44.689749    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:44:44.691094    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Pid is 3728
	I0816 05:44:44.691611    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Attempt 0
	I0816 05:44:44.691627    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.691728    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3728
	I0816 05:44:44.693940    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Searching for f2:da:75:16:53:b7 in /var/db/dhcpd_leases ...
	I0816 05:44:44.694051    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:44:44.694092    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 05:44:44.694127    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:44:44.694155    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:44:44.694170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetConfigRaw
	I0816 05:44:44.694173    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:44:44.694200    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found match: f2:da:75:16:53:b7
	I0816 05:44:44.694236    3700 main.go:141] libmachine: (ha-073000-m04) DBG | IP: 192.169.0.8
	I0816 05:44:44.695065    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:44.695278    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.695692    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:44:44.695703    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.695833    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:44.695931    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:44.696050    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696166    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696254    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:44.696382    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:44.696563    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:44.696574    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:44:44.699477    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:44:44.708676    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:44:44.709681    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:44.709706    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:44.709738    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:44.709752    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.096454    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:44:45.096470    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:44:45.211316    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:45.211336    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:45.211351    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:45.211358    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.212223    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:44:45.212237    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:44:50.828020    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:44:50.828090    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:44:50.828101    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:44:50.851950    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:44:55.760625    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:44:55.760639    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760793    3700 buildroot.go:166] provisioning hostname "ha-073000-m04"
	I0816 05:44:55.760805    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760899    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.760990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.761085    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761159    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761232    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.761366    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.761519    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.761528    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m04 && echo "ha-073000-m04" | sudo tee /etc/hostname
	I0816 05:44:55.833156    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m04
	
	I0816 05:44:55.833170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.833308    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.833414    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833503    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833603    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.833738    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.833899    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.833910    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:44:55.900349    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:44:55.900365    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:44:55.900383    3700 buildroot.go:174] setting up certificates
	I0816 05:44:55.900391    3700 provision.go:84] configureAuth start
	I0816 05:44:55.900398    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.900534    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:55.900638    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.900717    3700 provision.go:143] copyHostCerts
	I0816 05:44:55.900744    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900810    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:44:55.900816    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900947    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:44:55.901143    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901190    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:44:55.901195    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901271    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:44:55.901417    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901455    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:44:55.901460    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901535    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:44:55.901685    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m04 san=[127.0.0.1 192.169.0.8 ha-073000-m04 localhost minikube]
	I0816 05:44:56.021206    3700 provision.go:177] copyRemoteCerts
	I0816 05:44:56.021264    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:44:56.021279    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.021423    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.021518    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.021612    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.021689    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:56.060318    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:44:56.060388    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:44:56.079682    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:44:56.079759    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:44:56.100671    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:44:56.100755    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:44:56.120911    3700 provision.go:87] duration metric: took 220.512292ms to configureAuth
	I0816 05:44:56.120927    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:44:56.121094    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:56.121108    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:56.121244    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.121333    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.121413    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121488    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121577    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.121685    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.121810    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.121818    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:44:56.183314    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:44:56.183328    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:44:56.183405    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:44:56.183418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.183543    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.183624    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183720    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183811    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.183942    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.184086    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.184135    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:44:56.259224    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:44:56.259247    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.259375    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.259477    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259561    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259648    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.259767    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.259901    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.259914    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:57.846578    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:57.846593    3700 machine.go:96] duration metric: took 13.151151754s to provisionDockerMachine
	I0816 05:44:57.846601    3700 start.go:293] postStartSetup for "ha-073000-m04" (driver="hyperkit")
	I0816 05:44:57.846608    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:57.846619    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.846827    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:57.846841    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.846963    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.847057    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.847190    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.847325    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.890251    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:57.893714    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:57.893725    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:57.893828    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:57.894005    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:57.894011    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:57.894210    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:57.903672    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:57.936540    3700 start.go:296] duration metric: took 89.932708ms for postStartSetup
	I0816 05:44:57.936562    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.936732    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:57.936743    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.936825    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.936908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.936990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.937072    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.974376    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:57.974431    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:58.026259    3700 fix.go:56] duration metric: took 13.470511319s for fixHost
	I0816 05:44:58.026289    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.026437    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.026567    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026661    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026739    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.026870    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:58.027046    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:58.027055    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:58.089267    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812298.037026032
	
	I0816 05:44:58.089280    3700 fix.go:216] guest clock: 1723812298.037026032
	I0816 05:44:58.089285    3700 fix.go:229] Guest: 2024-08-16 05:44:58.037026032 -0700 PDT Remote: 2024-08-16 05:44:58.026278 -0700 PDT m=+113.498555850 (delta=10.748032ms)
	I0816 05:44:58.089296    3700 fix.go:200] guest clock delta is within tolerance: 10.748032ms
	I0816 05:44:58.089300    3700 start.go:83] releasing machines lock for "ha-073000-m04", held for 13.533577972s
	I0816 05:44:58.089315    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.089444    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:58.113019    3700 out.go:177] * Found network options:
	I0816 05:44:58.133803    3700 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 05:44:58.154869    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.154894    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.154908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155540    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155619    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:58.155647    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	W0816 05:44:58.155674    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.155690    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.155757    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:58.155778    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.155796    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155925    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155946    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156056    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156076    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156184    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156198    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:58.156285    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	W0816 05:44:58.193631    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:58.193701    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:58.236070    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:58.236085    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.236153    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.252488    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:58.262662    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:58.272809    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.272876    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:58.283088    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.293199    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:58.302692    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.312080    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:58.321436    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:58.330649    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:58.339785    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:58.349176    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:58.357543    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:58.365884    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.462788    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:44:58.483641    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.483717    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:58.502138    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.514733    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:58.534512    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.547599    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.558372    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:58.578053    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.588770    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.604147    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:58.607001    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:58.614131    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:58.627780    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:58.724561    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:58.838116    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.838140    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:58.852167    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.944841    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:45:59.852967    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.909307795s)
	I0816 05:45:59.853035    3700 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 05:45:59.887051    3700 out.go:201] 
	W0816 05:45:59.908317    3700 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 12:44:56 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.477961385Z" level=info msg="Starting up"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.478651123Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.479149818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=492
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.497251014Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512736016Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512786960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512832906Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512843449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512990846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513025418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513142091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513176878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513189848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513197982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513328837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513514337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515123592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515162448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515278467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515313029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515424326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515511733Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517455314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517544772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517585141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517601510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517612297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517713222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517933474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518033958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518069471Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518088650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518101306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518111033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518119014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518128230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518155729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518197753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518209146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518217247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518232727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518242479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518257521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518270826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518280074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518288937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518296642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518305847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518314748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518324203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518386404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518396238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518404404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518414105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518428969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518437387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518445132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518491204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518506443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518514647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518522672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518529245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518537689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518544653Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518899090Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518957259Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519012111Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519026933Z" level=info msg="containerd successfully booted in 0.022691s"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.498621326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.511032578Z" level=info msg="Loading containers: start."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.643404815Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.708639630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756823239Z" level=warning msg="error locating sandbox id 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd: sandbox 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd not found"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756925263Z" level=info msg="Loading containers: done."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.763915655Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.764081581Z" level=info msg="Daemon has completed initialization"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.785909245Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 12:44:57 ha-073000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.786078565Z" level=info msg="API listen on [::]:2376"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.921954679Z" level=info msg="Processing signal 'terminated'"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923118966Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923233559Z" level=info msg="Daemon shutdown complete"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923326494Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923341810Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 12:44:58 ha-073000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 dockerd[1163]: time="2024-08-16T12:44:59.962962742Z" level=info msg="Starting up"
	Aug 16 12:45:59 ha-073000-m04 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 12:44:56 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.477961385Z" level=info msg="Starting up"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.478651123Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.479149818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=492
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.497251014Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512736016Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512786960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512832906Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512843449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512990846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513025418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513142091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513176878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513189848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513197982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513328837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513514337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515123592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515162448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515278467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515313029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515424326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515511733Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517455314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517544772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517585141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517601510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517612297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517713222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517933474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518033958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518069471Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518088650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518101306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518111033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518119014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518128230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518155729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518197753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518209146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518217247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518232727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518242479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518257521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518270826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518280074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518288937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518296642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518305847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518314748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518324203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518386404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518396238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518404404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518414105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518428969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518437387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518445132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518491204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518506443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518514647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518522672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518529245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518537689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518544653Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518899090Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518957259Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519012111Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519026933Z" level=info msg="containerd successfully booted in 0.022691s"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.498621326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.511032578Z" level=info msg="Loading containers: start."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.643404815Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.708639630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756823239Z" level=warning msg="error locating sandbox id 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd: sandbox 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd not found"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756925263Z" level=info msg="Loading containers: done."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.763915655Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.764081581Z" level=info msg="Daemon has completed initialization"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.785909245Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 12:44:57 ha-073000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.786078565Z" level=info msg="API listen on [::]:2376"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.921954679Z" level=info msg="Processing signal 'terminated'"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923118966Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923233559Z" level=info msg="Daemon shutdown complete"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923326494Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923341810Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 12:44:58 ha-073000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 dockerd[1163]: time="2024-08-16T12:44:59.962962742Z" level=info msg="Starting up"
	Aug 16 12:45:59 ha-073000-m04 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 05:45:59.908450    3700 out.go:270] * 
	W0816 05:45:59.909700    3700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:45:59.951019    3700 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-073000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-073000 -n ha-073000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 logs -n 25: (3.343393244s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-073000 cp ha-073000-m03:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04:/home/docker/cp-test_ha-073000-m03_ha-073000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m04 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m03_ha-073000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-073000 cp testdata/cp-test.txt                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000:/home/docker/cp-test_ha-073000-m04_ha-073000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000 sudo cat                                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m02:/home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m02 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m03:/home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m03 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-073000 node stop m02 -v=7                                                                                                 | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-073000 node start m02 -v=7                                                                                                | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:39 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-073000 -v=7                                                                                                       | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-073000 -v=7                                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT | 16 Aug 24 05:39 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-073000 --wait=true -v=7                                                                                                | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT | 16 Aug 24 05:42 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-073000                                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT |                     |
	| node    | ha-073000 node delete m03 -v=7                                                                                               | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT | 16 Aug 24 05:42 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-073000 stop -v=7                                                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT | 16 Aug 24 05:43 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-073000 --wait=true                                                                                                     | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:43 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:43:04
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:43:04.564740    3700 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:04.564910    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.564915    3700 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:04.564919    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.565081    3700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:43:04.566585    3700 out.go:352] Setting JSON to false
	I0816 05:43:04.588805    3700 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1962,"bootTime":1723810222,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:43:04.588897    3700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:04.613000    3700 out.go:177] * [ha-073000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:43:04.653806    3700 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:04.653862    3700 notify.go:220] Checking for updates...
	I0816 05:43:04.696885    3700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:04.717792    3700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:43:04.738830    3700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:04.759882    3700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 05:43:04.780629    3700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:04.802633    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:04.803322    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.803409    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.812971    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
	I0816 05:43:04.813324    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.813803    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.813822    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.814047    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.814164    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.814416    3700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:04.814654    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.814677    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.823004    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
	I0816 05:43:04.823356    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.823668    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.823676    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.823881    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.823986    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.852886    3700 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 05:43:04.873686    3700 start.go:297] selected driver: hyperkit
	I0816 05:43:04.873736    3700 start.go:901] validating driver "hyperkit" against &{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.873963    3700 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:04.874147    3700 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.874351    3700 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 05:43:04.884210    3700 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 05:43:04.888002    3700 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.888025    3700 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 05:43:04.890692    3700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:04.890731    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:04.890738    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:04.890804    3700 start.go:340] cluster config:
	{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.890902    3700 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.933833    3700 out.go:177] * Starting "ha-073000" primary control-plane node in "ha-073000" cluster
	I0816 05:43:04.954485    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:04.954567    3700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 05:43:04.954587    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:04.954798    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:04.954819    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:04.955011    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:04.955870    3700 start.go:360] acquireMachinesLock for ha-073000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:04.955995    3700 start.go:364] duration metric: took 100.576µs to acquireMachinesLock for "ha-073000"
	I0816 05:43:04.956044    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:04.956062    3700 fix.go:54] fixHost starting: 
	I0816 05:43:04.956492    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.956518    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.965467    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
	I0816 05:43:04.965836    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.966195    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.966210    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.966502    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.966647    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.966748    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:43:04.966849    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.966924    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3625
	I0816 05:43:04.967937    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:04.967979    3700 fix.go:112] recreateIfNeeded on ha-073000: state=Stopped err=<nil>
	I0816 05:43:04.968006    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	W0816 05:43:04.968088    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:05.010683    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000" ...
	I0816 05:43:05.031624    3700 main.go:141] libmachine: (ha-073000) Calling .Start
	I0816 05:43:05.031872    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.031897    3700 main.go:141] libmachine: (ha-073000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid
	I0816 05:43:05.033643    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:05.033659    3700 main.go:141] libmachine: (ha-073000) DBG | pid 3625 is in state "Stopped"
	I0816 05:43:05.033683    3700 main.go:141] libmachine: (ha-073000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid...
	I0816 05:43:05.034080    3700 main.go:141] libmachine: (ha-073000) DBG | Using UUID 449fd9a3-1c71-4e9a-9271-363ec4bdb253
	I0816 05:43:05.149249    3700 main.go:141] libmachine: (ha-073000) DBG | Generated MAC 36:31:25:a5:a2:ed
	I0816 05:43:05.149291    3700 main.go:141] libmachine: (ha-073000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:05.149397    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149433    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149473    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "449fd9a3-1c71-4e9a-9271-363ec4bdb253", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:05.149540    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 449fd9a3-1c71-4e9a-9271-363ec4bdb253 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:05.149556    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:05.150961    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Pid is 3714
	I0816 05:43:05.151298    3700 main.go:141] libmachine: (ha-073000) DBG | Attempt 0
	I0816 05:43:05.151311    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.151435    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:43:05.153225    3700 main.go:141] libmachine: (ha-073000) DBG | Searching for 36:31:25:a5:a2:ed in /var/db/dhcpd_leases ...
	I0816 05:43:05.153302    3700 main.go:141] libmachine: (ha-073000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:05.153320    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:05.153335    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:05.153348    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:05.153395    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09a1c}
	I0816 05:43:05.153412    3700 main.go:141] libmachine: (ha-073000) DBG | Found match: 36:31:25:a5:a2:ed
	I0816 05:43:05.153421    3700 main.go:141] libmachine: (ha-073000) Calling .GetConfigRaw
	I0816 05:43:05.153453    3700 main.go:141] libmachine: (ha-073000) DBG | IP: 192.169.0.5
	I0816 05:43:05.154140    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:05.154367    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:05.154767    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:05.154779    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:05.154938    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:05.155074    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:05.155194    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155310    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155408    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:05.155550    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:05.155750    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:05.155759    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:05.159119    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:05.211364    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:05.212077    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.212095    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.212103    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.212109    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.591470    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:05.591483    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:05.706454    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.706476    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.706490    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.706501    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.707461    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:05.707472    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:11.286594    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:43:11.286691    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:43:11.286700    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:43:11.310519    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:43:40.225322    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:40.225337    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225482    3700 buildroot.go:166] provisioning hostname "ha-073000"
	I0816 05:43:40.225493    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225593    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.225692    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.225793    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225892    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225986    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.226106    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.226271    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.226298    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000 && echo "ha-073000" | sudo tee /etc/hostname
	I0816 05:43:40.294551    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000
	
	I0816 05:43:40.294568    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.294702    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.294805    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.294917    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.295018    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.295131    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.295293    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.295303    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:40.357437    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:43:40.357455    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:40.357467    3700 buildroot.go:174] setting up certificates
	I0816 05:43:40.357475    3700 provision.go:84] configureAuth start
	I0816 05:43:40.357482    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.357611    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:40.357710    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.357800    3700 provision.go:143] copyHostCerts
	I0816 05:43:40.357831    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.357900    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:40.357908    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.358056    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:40.358263    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358303    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:40.358308    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358383    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:40.358527    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358564    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:40.358575    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358655    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:40.358790    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000 san=[127.0.0.1 192.169.0.5 ha-073000 localhost minikube]
	I0816 05:43:40.668742    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:40.668797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:40.668812    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.669020    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.669115    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.669208    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.669298    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:40.705870    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:40.705942    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:40.727099    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:40.727157    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 05:43:40.747334    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:40.747393    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:43:40.766795    3700 provision.go:87] duration metric: took 409.312981ms to configureAuth
	I0816 05:43:40.766810    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:40.766972    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:40.766985    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:40.767112    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.767214    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.767307    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767377    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767456    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.767585    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.767712    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.767720    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:40.823994    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:40.824009    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:43:40.824077    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:40.824089    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.824227    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.824329    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824430    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824516    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.824679    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.824819    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.824862    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:40.894312    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:40.894335    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.894465    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.894566    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894651    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894725    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.894858    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.895012    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.895025    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:43:42.619681    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:43:42.619696    3700 machine.go:96] duration metric: took 37.465655472s to provisionDockerMachine
	I0816 05:43:42.619707    3700 start.go:293] postStartSetup for "ha-073000" (driver="hyperkit")
	I0816 05:43:42.619714    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:43:42.619724    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.619902    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:43:42.619926    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.620017    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.620114    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.620221    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.620305    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.656447    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:43:42.659759    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:43:42.659773    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:43:42.659872    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:43:42.660059    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:43:42.660065    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:43:42.660269    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:43:42.667667    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:42.687880    3700 start.go:296] duration metric: took 68.167584ms for postStartSetup
	I0816 05:43:42.687899    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.688070    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:43:42.688083    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.688171    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.688267    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.688367    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.688456    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.722698    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:43:42.722761    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:43:42.776641    3700 fix.go:56] duration metric: took 37.82132494s for fixHost
	I0816 05:43:42.776663    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.776810    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.776931    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777033    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777125    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.777253    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:42.777390    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:42.777397    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:43:42.836399    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812222.925313054
	
	I0816 05:43:42.836411    3700 fix.go:216] guest clock: 1723812222.925313054
	I0816 05:43:42.836417    3700 fix.go:229] Guest: 2024-08-16 05:43:42.925313054 -0700 PDT Remote: 2024-08-16 05:43:42.776654 -0700 PDT m=+38.247448415 (delta=148.659054ms)
	I0816 05:43:42.836434    3700 fix.go:200] guest clock delta is within tolerance: 148.659054ms
	I0816 05:43:42.836437    3700 start.go:83] releasing machines lock for "ha-073000", held for 37.881174383s
	I0816 05:43:42.836457    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.836598    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:42.836699    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837049    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837160    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837249    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:43:42.837284    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837297    3700 ssh_runner.go:195] Run: cat /version.json
	I0816 05:43:42.837308    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837399    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837413    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837511    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837521    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837609    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837623    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837690    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.837711    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.913686    3700 ssh_runner.go:195] Run: systemctl --version
	I0816 05:43:42.918889    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:43:42.923312    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:43:42.923351    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:43:42.935697    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:43:42.935707    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:42.935801    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:42.953681    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:43:42.962535    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:43:42.971266    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:43:42.971307    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:43:42.979934    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:42.988664    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:43:42.997290    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:43.005918    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:43:43.014721    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:43:43.023404    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:43:43.032084    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:43:43.040766    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:43:43.048727    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:43:43.056628    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.160133    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:43:43.175551    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:43.175624    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:43:43.187204    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.198626    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:43:43.214407    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.226374    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.237460    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:43:43.257683    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.271060    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:43.289045    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:43:43.291949    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:43:43.299258    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:43:43.312470    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:43:43.422601    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:43:43.528683    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:43:43.528764    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:43:43.542650    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.653228    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:43:46.028721    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.375520385s)
	I0816 05:43:46.028781    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:43:46.040150    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.049993    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:43:46.143000    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:43:46.256755    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.354748    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:43:46.369090    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.380481    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.481851    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:43:46.546753    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:43:46.546835    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:43:46.551170    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:43:46.551219    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:43:46.554224    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:43:46.581136    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:43:46.581204    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.600242    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.641436    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:43:46.641483    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:46.641865    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:43:46.646502    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:43:46.656383    3700 kubeadm.go:883] updating cluster {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 05:43:46.656461    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:46.656510    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.670426    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.670438    3700 docker.go:615] Images already preloaded, skipping extraction
	I0816 05:43:46.670515    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.682547    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.682568    3700 cache_images.go:84] Images are preloaded, skipping loading
	I0816 05:43:46.682577    3700 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 05:43:46.682650    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:43:46.682717    3700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:43:46.717612    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:46.717631    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:46.717641    3700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:43:46.717661    3700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-073000 NodeName:ha-073000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:43:46.717752    3700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-073000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 05:43:46.717766    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:43:46.717818    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:43:46.732805    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:43:46.732879    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 05:43:46.732932    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:43:46.744741    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:43:46.744797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 05:43:46.752198    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 05:43:46.766525    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:43:46.779788    3700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 05:43:46.793230    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:43:46.806345    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:43:46.809072    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:43:46.818297    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.921223    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:43:46.935952    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.5
	I0816 05:43:46.935964    3700 certs.go:194] generating shared ca certs ...
	I0816 05:43:46.935976    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.936150    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:43:46.936228    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:43:46.936237    3700 certs.go:256] generating profile certs ...
	I0816 05:43:46.936323    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:43:46.936347    3700 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e
	I0816 05:43:46.936361    3700 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 05:43:46.977158    3700 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e ...
	I0816 05:43:46.977174    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e: {Name:mk8d6f44d0e237393798a574888fbd7c16b75ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977520    3700 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e ...
	I0816 05:43:46.977530    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e: {Name:mk0b98c1e535c8fd1781c44e6f22509b6b916e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977744    3700 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt
	I0816 05:43:46.977955    3700 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key
	I0816 05:43:46.978212    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:43:46.978221    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:43:46.978248    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:43:46.978268    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:43:46.978286    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:43:46.978305    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:43:46.978327    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:43:46.978345    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:43:46.978363    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:43:46.978461    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:43:46.978507    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:43:46.978516    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:43:46.978550    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:43:46.978580    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:43:46.978610    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:43:46.978674    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:46.978708    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:46.978729    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:43:46.978748    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:43:46.979212    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:43:47.006926    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:43:47.033030    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:43:47.064204    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:43:47.096328    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:43:47.140607    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:43:47.183767    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:43:47.225875    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:43:47.272651    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:43:47.321871    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:43:47.361863    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:43:47.392530    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:43:47.413203    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:43:47.419281    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:43:47.429288    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437638    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437698    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.445809    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:43:47.456922    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:43:47.468355    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473399    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473439    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.477636    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:43:47.487065    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:43:47.496174    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499485    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499517    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.503664    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:43:47.512642    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:43:47.516083    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:43:47.520349    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:43:47.524473    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:43:47.528697    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:43:47.532807    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:43:47.536987    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 05:43:47.541120    3700 kubeadm.go:392] StartCluster: {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:47.541240    3700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:43:47.554428    3700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:43:47.562930    3700 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:43:47.562950    3700 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:43:47.563002    3700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:43:47.571138    3700 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:43:47.571458    3700 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-073000" does not appear in /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.571540    3700 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1009/kubeconfig needs updating (will repair): [kubeconfig missing "ha-073000" cluster setting kubeconfig missing "ha-073000" context setting]
	I0816 05:43:47.571730    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.572561    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.572756    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:43:47.573052    3700 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 05:43:47.573236    3700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:43:47.581202    3700 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 05:43:47.581215    3700 kubeadm.go:597] duration metric: took 18.259849ms to restartPrimaryControlPlane
	I0816 05:43:47.581220    3700 kubeadm.go:394] duration metric: took 40.104743ms to StartCluster
	I0816 05:43:47.581228    3700 settings.go:142] acquiring lock: {Name:mkb3c8aac25c21025142737c3a236d96f65e9fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581298    3700 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.581626    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581845    3700 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:47.581857    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:43:47.581865    3700 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:43:47.581987    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.603872    3700 out.go:177] * Enabled addons: 
	I0816 05:43:47.645376    3700 addons.go:510] duration metric: took 63.484341ms for enable addons: enabled=[]
	I0816 05:43:47.645417    3700 start.go:246] waiting for cluster config update ...
	I0816 05:43:47.645429    3700 start.go:255] writing updated cluster config ...
	I0816 05:43:47.667512    3700 out.go:201] 
	I0816 05:43:47.689977    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.690106    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.712362    3700 out.go:177] * Starting "ha-073000-m02" control-plane node in "ha-073000" cluster
	I0816 05:43:47.754492    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:47.754528    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:47.754704    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:47.754723    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:47.754841    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.755736    3700 start.go:360] acquireMachinesLock for ha-073000-m02: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:47.755840    3700 start.go:364] duration metric: took 80.235µs to acquireMachinesLock for "ha-073000-m02"
	I0816 05:43:47.755868    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:47.755877    3700 fix.go:54] fixHost starting: m02
	I0816 05:43:47.756330    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:47.756357    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:47.765501    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0816 05:43:47.765944    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:47.766357    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:47.766399    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:47.766686    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:47.766840    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.766960    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:43:47.767043    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.767163    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3630
	I0816 05:43:47.768076    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.768103    3700 fix.go:112] recreateIfNeeded on ha-073000-m02: state=Stopped err=<nil>
	I0816 05:43:47.768113    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	W0816 05:43:47.768243    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:47.789281    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m02" ...
	I0816 05:43:47.810495    3700 main.go:141] libmachine: (ha-073000-m02) Calling .Start
	I0816 05:43:47.810746    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.810809    3700 main.go:141] libmachine: (ha-073000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid
	I0816 05:43:47.812579    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.812591    3700 main.go:141] libmachine: (ha-073000-m02) DBG | pid 3630 is in state "Stopped"
	I0816 05:43:47.812606    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid...
	I0816 05:43:47.812915    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Using UUID 2ecbd3fa-135d-470f-9281-b78e2fd82941
	I0816 05:43:47.840853    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Generated MAC 3a:16:de:25:18:f9
	I0816 05:43:47.840881    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:47.841024    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841093    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841131    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2ecbd3fa-135d-470f-9281-b78e2fd82941", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:47.841173    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2ecbd3fa-135d-470f-9281-b78e2fd82941 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:47.841196    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:47.842666    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Pid is 3719
	I0816 05:43:47.842981    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Attempt 0
	I0816 05:43:47.843001    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.843149    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3719
	I0816 05:43:47.845190    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Searching for 3a:16:de:25:18:f9 in /var/db/dhcpd_leases ...
	I0816 05:43:47.845245    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:47.845265    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:43:47.845294    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:47.845311    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:47.845326    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:47.845337    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found match: 3a:16:de:25:18:f9
	I0816 05:43:47.845356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetConfigRaw
	I0816 05:43:47.845364    3700 main.go:141] libmachine: (ha-073000-m02) DBG | IP: 192.169.0.6
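	The lease lookup above (matching the VM's generated MAC against entries in /var/db/dhcpd_leases to recover its IP) can be reproduced as a standalone sketch. This is an illustrative shell/awk version, not minikube's actual Go implementation; the here-doc sample stands in for the real lease file and copies the entry format shown in the log.

```shell
#!/bin/sh
# Hypothetical standalone version of the dhcpd_leases scan in the log:
# find the entry whose HWAddress field carries a known MAC and print its IP.
set -eu
MAC='3a:16:de:25:18:f9'
IP=$(awk -v mac="$MAC" '
    $3 == ("HWAddress:" mac) {        # field 3 carries HWAddress:<mac>
        sub(/^IPAddress:/, "", $2)    # field 2 carries IPAddress:<ip>
        print $2
    }' <<'EOF'
{Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
{Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
EOF
)
echo "IP=$IP"
```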
	I0816 05:43:47.846051    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:47.846244    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.846807    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:47.846817    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.846948    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:47.847069    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:47.847170    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847430    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:47.847576    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:47.847744    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:47.847752    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:47.850417    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:47.859543    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:47.860408    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:47.860422    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:47.860431    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:47.860467    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.243712    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:48.243733    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:48.358576    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:48.358605    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:48.358619    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:48.358631    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.359399    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:48.359409    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:53.958154    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 05:43:53.958240    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 05:43:53.958254    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 05:43:53.983312    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 05:43:58.907700    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:58.907714    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907845    3700 buildroot.go:166] provisioning hostname "ha-073000-m02"
	I0816 05:43:58.907881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907973    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.908072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.908173    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.908483    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.908630    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.908640    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m02 && echo "ha-073000-m02" | sudo tee /etc/hostname
	I0816 05:43:58.968566    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m02
	
	I0816 05:43:58.968580    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.968718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.968818    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.968913    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.969011    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.969127    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.969267    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.969280    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:59.024122    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
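	The /etc/hosts update that was just run over SSH can be exercised in isolation. This sketch operates on a temp copy rather than the real /etc/hosts, and assumes GNU grep/sed (as on the buildroot guest); the hostname and seed contents are taken from the log.

```shell
#!/bin/sh
# Standalone sketch of minikube's idempotent hostname entry update:
# only touch the file when no line already ends in the hostname, and
# prefer rewriting an existing 127.0.1.1 entry over appending a new one.
set -eu
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=ha-073000-m02

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # Rewrite the existing 127.0.1.1 entry in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running the same script twice leaves the file unchanged on the second pass, which is why the SSH command in the log is safe to re-run on every provision.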
	I0816 05:43:59.024136    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:59.024144    3700 buildroot.go:174] setting up certificates
	I0816 05:43:59.024150    3700 provision.go:84] configureAuth start
	I0816 05:43:59.024156    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:59.024280    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:59.024383    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.024473    3700 provision.go:143] copyHostCerts
	I0816 05:43:59.024501    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024550    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:59.024556    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024690    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:59.024885    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.024915    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:59.024920    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.025027    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:59.025190    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025223    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:59.025228    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025295    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:59.025446    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m02 san=[127.0.0.1 192.169.0.6 ha-073000-m02 localhost minikube]
	I0816 05:43:59.071749    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:59.071798    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:59.071819    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.071951    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.072035    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.072105    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.072191    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:43:59.104582    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:59.104649    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:59.123906    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:59.123983    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:43:59.142982    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:59.143045    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:43:59.162066    3700 provision.go:87] duration metric: took 137.911741ms to configureAuth
	I0816 05:43:59.162078    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:59.162258    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:59.162271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:59.162402    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.162489    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.162572    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162650    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162733    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.162851    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.162983    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.162993    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:59.210853    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:59.210865    3700 buildroot.go:70] root file system type: tmpfs
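	The root-filesystem probe above can be run on its own: `df --output=fstype /` prints a header row plus the fstype of `/`, and `tail -n 1` strips the header. On the buildroot guest this yields `tmpfs`; on other hosts the value will differ. Note `--output` is a GNU coreutils extension, so this assumes GNU `df`.

```shell
#!/bin/sh
# The fstype probe from the log in isolation (GNU coreutils df assumed).
set -eu
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fstype: $FSTYPE"
```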
	I0816 05:43:59.210945    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:59.210957    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.211118    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.211219    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211310    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211387    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.211514    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.211649    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.211694    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:59.271471    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:59.271487    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.271647    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.271739    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271846    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271935    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.272053    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.272197    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.272208    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:00.927291    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:00.927305    3700 machine.go:96] duration metric: took 13.080748192s to provisionDockerMachine
	I0816 05:44:00.927312    3700 start.go:293] postStartSetup for "ha-073000-m02" (driver="hyperkit")
	I0816 05:44:00.927320    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:00.927330    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:00.927511    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:00.927525    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:00.927652    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:00.927731    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:00.927829    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:00.927905    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:00.960594    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:00.964512    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:00.964524    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:00.964627    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:00.964771    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:00.964778    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:00.964934    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:00.975551    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:01.005513    3700 start.go:296] duration metric: took 78.192885ms for postStartSetup
	I0816 05:44:01.005559    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.005745    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:01.005758    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.005896    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.005983    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.006072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.006164    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.040756    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:01.040818    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:01.075264    3700 fix.go:56] duration metric: took 13.319647044s for fixHost
	I0816 05:44:01.075289    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.075435    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.075528    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075613    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.075847    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:01.075998    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:44:01.076006    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:01.125972    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812240.969906147
	
	I0816 05:44:01.125983    3700 fix.go:216] guest clock: 1723812240.969906147
	I0816 05:44:01.125988    3700 fix.go:229] Guest: 2024-08-16 05:44:00.969906147 -0700 PDT Remote: 2024-08-16 05:44:01.075279 -0700 PDT m=+56.546434198 (delta=-105.372853ms)
	I0816 05:44:01.125998    3700 fix.go:200] guest clock delta is within tolerance: -105.372853ms
	I0816 05:44:01.126002    3700 start.go:83] releasing machines lock for "ha-073000-m02", held for 13.370412469s
	I0816 05:44:01.126019    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.126142    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:01.147724    3700 out.go:177] * Found network options:
	I0816 05:44:01.167556    3700 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 05:44:01.188682    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.188720    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189649    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189985    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:01.190020    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	W0816 05:44:01.190130    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.190184    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190263    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:01.190286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.190352    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190515    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.190522    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190713    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.190722    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190891    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.191002    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	W0816 05:44:01.219673    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:01.219729    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:01.266010    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:01.266030    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.266137    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.282065    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:01.291072    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:01.299924    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.299972    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:01.308888    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.317715    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:01.326478    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.335362    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:01.344565    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:01.353443    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:01.362391    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:01.371153    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:01.379211    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:01.387397    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.485288    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:44:01.504163    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.504230    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:01.519289    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.533468    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:01.549919    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.560311    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.570439    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:01.589516    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.599936    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.614849    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:01.617987    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:01.625242    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:01.638690    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:01.731621    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:01.840350    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.840371    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:01.854317    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.960384    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:44:04.269941    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.309582879s)
	I0816 05:44:04.270007    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:44:04.280320    3700 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:44:04.292872    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.303371    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:44:04.393390    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:44:04.502895    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.604917    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:44:04.618462    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.629172    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.732241    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:44:04.796052    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:44:04.796135    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:44:04.800527    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:44:04.800578    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:44:04.803568    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:44:04.832000    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:44:04.832069    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.850869    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.890177    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:44:04.933118    3700 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 05:44:04.954934    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:04.955381    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:44:04.959881    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:44:04.969321    3700 mustload.go:65] Loading cluster: ha-073000
	I0816 05:44:04.969488    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:04.969741    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.969756    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.978313    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52098
	I0816 05:44:04.978649    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.979005    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.979022    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.979231    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.979362    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:44:04.979460    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:04.979527    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:44:04.980457    3700 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:44:04.980703    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.980719    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.989380    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52100
	I0816 05:44:04.989872    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.990229    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.990239    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.990441    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.990567    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:44:04.990667    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.6
	I0816 05:44:04.990673    3700 certs.go:194] generating shared ca certs ...
	I0816 05:44:04.990681    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:44:04.990819    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:44:04.990876    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:44:04.990885    3700 certs.go:256] generating profile certs ...
	I0816 05:44:04.990968    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:44:04.991052    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.852e3a00
	I0816 05:44:04.991104    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:44:04.991115    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:44:04.991137    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:44:04.991158    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:44:04.991181    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:44:04.991203    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:44:04.991224    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:44:04.991243    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:44:04.991260    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:44:04.991336    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:44:04.991373    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:44:04.991382    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:44:04.991415    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:44:04.991446    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:44:04.991475    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:44:04.991545    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:04.991577    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:44:04.991598    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:04.991616    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:44:04.991641    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:44:04.991732    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:44:04.991816    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:44:04.991895    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:44:04.991976    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:44:05.018674    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 05:44:05.021887    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 05:44:05.030501    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 05:44:05.033440    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 05:44:05.041955    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 05:44:05.044846    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 05:44:05.053721    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 05:44:05.056775    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 05:44:05.065337    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 05:44:05.068254    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 05:44:05.076761    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 05:44:05.079704    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 05:44:05.088144    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:44:05.108529    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:44:05.128319    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:44:05.148205    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:44:05.168044    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:44:05.187959    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:44:05.207850    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:44:05.227864    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:44:05.247806    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:44:05.267586    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:44:05.287321    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:44:05.307517    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 05:44:05.321001    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 05:44:05.334635    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 05:44:05.348115    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 05:44:05.361521    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 05:44:05.375128    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 05:44:05.388391    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 05:44:05.402014    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:44:05.406108    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:44:05.414347    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417650    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417685    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.421754    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:44:05.429962    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:44:05.438138    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441411    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441444    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.445615    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:44:05.453740    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:44:05.462021    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465413    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465453    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.469602    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:44:05.477722    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:44:05.481045    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:44:05.485278    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:44:05.489478    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:44:05.493769    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:44:05.497993    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:44:05.502305    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 05:44:05.506534    3700 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 05:44:05.506585    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:44:05.506599    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:44:05.506631    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:44:05.518840    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:44:05.518872    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
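The generated manifest above wires kube-vip's behavior entirely through container env vars (VIP address, control-plane and load-balancer toggles, API server port). A minimal stdlib-only sketch of how such an env block could be sanity-checked — the `ENV_BLOCK` string is an abridged, illustrative copy, not minikube's actual template:

```python
import re

# Abridged copy of the kube-vip env entries logged above (illustrative only).
ENV_BLOCK = """\
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: cp_enable
      value: "true"
    - name: address
      value: 192.169.0.254
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
"""

def env_map(block: str) -> dict:
    """Parse `- name:` / `value:` pairs from a kube-vip env block."""
    pairs = re.findall(r'- name\s*:\s*(\w+)\s*\n\s*value:\s*"?([^"\n]*)"?', block)
    return dict(pairs)

env = env_map(ENV_BLOCK)
# The VIP handed to kube-vip must equal the cluster's APIServerHAVIP,
# and the load-balancer port must match the API server port.
assert env["address"] == "192.169.0.254"
assert env["lb_port"] == "8443" == env["port"]
```

The regex tolerates whitespace variations around the key, so the check is robust against minor template formatting differences.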
	I0816 05:44:05.518938    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:44:05.527488    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:44:05.527548    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 05:44:05.535755    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 05:44:05.549218    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:44:05.562474    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:44:05.575901    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:44:05.578825    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
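The `/etc/hosts` rewrite above is idempotent: it drops any stale `control-plane.minikube.internal` line and appends the current VIP. The same update logic can be sketched in Python (operating on a string rather than the real `/etc/hosts`; the function name is hypothetical):

```python
def pin_host(hosts: str, ip: str,
             name: str = "control-plane.minikube.internal") -> str:
    """Remove any existing entry for `name`, then append `ip\\tname`."""
    # Keep every line that does not already map `name` (tab-separated,
    # mirroring the grep -v $'\t<name>$' filter in the logged command).
    kept = [line for line in hosts.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.169.0.99\tcontrol-plane.minikube.internal\n"
after = pin_host(before, "192.169.0.254")
assert "192.169.0.99" not in after
assert after.endswith("192.169.0.254\tcontrol-plane.minikube.internal\n")
```

Because the old entry is filtered out before the new one is appended, running the update repeatedly never accumulates duplicate lines.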
	I0816 05:44:05.588727    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.694671    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.710202    3700 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:44:05.710412    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:05.731897    3700 out.go:177] * Verifying Kubernetes components...
	I0816 05:44:05.773259    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.888127    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.905019    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:44:05.905207    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 05:44:05.905240    3700 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 05:44:05.905409    3700 node_ready.go:35] waiting up to 6m0s for node "ha-073000-m02" to be "Ready" ...
	I0816 05:44:05.905490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:05.905495    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:05.905503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:05.905507    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.204653    3700 round_trippers.go:574] Response Status: 200 OK in 8299 milliseconds
	I0816 05:44:14.205215    3700 node_ready.go:49] node "ha-073000-m02" has status "Ready":"True"
	I0816 05:44:14.205228    3700 node_ready.go:38] duration metric: took 8.299966036s for node "ha-073000-m02" to be "Ready" ...
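A node-readiness check like the one logged above boils down to inspecting the `Ready` condition in the `/api/v1/nodes/<name>` response. A sketch of that decision, using field names from the Kubernetes Node API (the sample body is fabricated for illustration):

```python
import json

# Illustrative Node response body; condition fields follow the Kubernetes API.
node_json = json.dumps({
    "kind": "Node",
    "metadata": {"name": "ha-073000-m02"},
    "status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"},
    ]},
})

def is_ready(body: str) -> bool:
    """True iff the node reports a Ready condition with status "True"."""
    node = json.loads(body)
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in node["status"]["conditions"])

assert is_ready(node_json)
```

Note that `status` can also be `"Unknown"` (e.g. when the kubelet stops heartbeating), which this check correctly treats as not ready.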
	I0816 05:44:14.205235    3700 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:14.205277    3700 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:44:14.205286    3700 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:44:14.205323    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:14.205327    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.205333    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.205336    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.223247    3700 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0816 05:44:14.231122    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.231187    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2fdpw
	I0816 05:44:14.231192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.231198    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.231208    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.240205    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:14.240681    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.240689    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.240695    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.240699    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.247571    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:14.247971    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.247981    3700 pod_ready.go:82] duration metric: took 16.842454ms for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.247988    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.248023    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vf22s
	I0816 05:44:14.248028    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.248034    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.248038    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252093    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.252500    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.252508    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.252513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.255102    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.255471    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.255482    3700 pod_ready.go:82] duration metric: took 7.488195ms for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255489    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255538    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000
	I0816 05:44:14.255543    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.255549    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.255554    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257423    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.257786    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.257793    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.257798    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257802    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:14.261582    3700 pod_ready.go:93] pod "etcd-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.261592    3700 pod_ready.go:82] duration metric: took 6.098581ms for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261599    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261644    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m02
	I0816 05:44:14.261649    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.261654    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261658    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.264072    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.264627    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:14.264635    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.264640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.264645    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.267306    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.267636    3700 pod_ready.go:93] pod "etcd-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.267645    3700 pod_ready.go:82] duration metric: took 6.041319ms for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267652    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267706    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m03
	I0816 05:44:14.267711    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.267716    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.267726    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.269558    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.406286    3700 request.go:632] Waited for 136.053726ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406320    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406325    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.406330    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.406334    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.412790    3700 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0816 05:44:14.412989    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413000    3700 pod_ready.go:82] duration metric: took 145.343663ms for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:14.413019    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413037    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.606275    3700 request.go:632] Waited for 193.204942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606325    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606330    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.606342    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.606346    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.611263    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.806263    3700 request.go:632] Waited for 194.483786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806300    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806306    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.806312    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.806316    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.808457    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.809016    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.809026    3700 pod_ready.go:82] duration metric: took 395.988936ms for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.809033    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:15.005594    3700 request.go:632] Waited for 196.505275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005624    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005630    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.005637    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.005640    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.010212    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.206584    3700 request.go:632] Waited for 195.946236ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206645    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206685    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.206691    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.206695    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.211350    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.405410    3700 request.go:632] Waited for 95.393387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405469    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405474    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.405479    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.405483    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.408080    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.605592    3700 request.go:632] Waited for 196.04685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605628    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605634    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.605640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.605644    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.607860    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.810998    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.811014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.811021    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.811029    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.813293    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:16.005626    3700 request.go:632] Waited for 191.969847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005743    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005754    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.005765    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.005773    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.008807    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.309801    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.309825    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.309836    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.309844    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.313121    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.407323    3700 request.go:632] Waited for 93.416086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407387    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407397    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.407409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.407424    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.410882    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.810461    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.810486    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.810498    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.810504    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.813546    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.814282    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.814289    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.814295    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.814298    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.816149    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:16.816456    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:17.309900    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.309921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.309932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.309937    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.312735    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:17.313209    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.313218    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.313223    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.313233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.314796    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:17.809685    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.809718    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.809758    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.809767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.813579    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:17.814147    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.814157    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.814165    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.814169    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.815986    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.309824    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.309839    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.309845    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.309850    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.312500    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:18.312950    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.312958    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.312964    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.312968    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.317556    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.811340    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.811362    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.811380    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.811389    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.815578    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.816331    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.816338    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.816343    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.816347    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.818287    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.818637    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:19.309154    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.309213    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.309226    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.309244    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.313107    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.313580    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.313589    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.313597    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.313601    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.315208    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:19.810298    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.810320    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.810332    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.810338    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.813934    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.814561    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.814571    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.814579    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.814589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.816289    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.309290    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.309312    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.309322    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.309328    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.313244    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.313715    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.313724    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.313731    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.313737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.315554    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.809680    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.809710    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.809723    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.809735    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.813009    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.813665    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.813674    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.813682    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.813686    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.815508    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.309619    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.309640    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.309667    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.309675    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.313585    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.314167    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.314174    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.314179    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.314182    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.315676    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.316053    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:21.809228    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.809250    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.809261    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.809267    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.812952    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.813489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.813500    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.813508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.813512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.815094    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.310261    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.310287    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.310299    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.310305    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.314627    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:22.314992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.314999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.315005    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.315008    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.316747    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.810493    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.810515    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.810526    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.810532    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814082    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:22.814652    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.814660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.814666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814670    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.816180    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.310190    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.310217    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.310228    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.310235    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.314496    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:23.314922    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.314929    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.314935    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.314939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.316481    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.316841    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:23.809175    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.809187    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.809202    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.809207    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.811160    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.811560    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.811568    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.811574    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.811578    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.814714    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.309762    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:24.309784    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.309796    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.309802    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.313492    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.314086    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.314097    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.314106    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.314111    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.315684    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.316026    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:24.316036    3700 pod_ready.go:82] duration metric: took 9.507184684s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316045    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316078    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m03
	I0816 05:44:24.316086    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.316091    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.316095    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.317489    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.317864    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:24.317872    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.317877    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.317881    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.319230    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:24.319275    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319288    3700 pod_ready.go:82] duration metric: took 3.236554ms for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.319295    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319299    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.319330    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000
	I0816 05:44:24.319335    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.319340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.319344    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.320953    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.321429    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:24.321437    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.321442    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.321446    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.322965    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.323320    3700 pod_ready.go:98] node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323329    3700 pod_ready.go:82] duration metric: took 4.023708ms for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.323334    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323339    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.323367    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.323371    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.323379    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.323384    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.324781    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.325216    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.325223    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.325229    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.325233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.326748    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.824459    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.824484    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.824494    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.824506    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828252    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.828701    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.828712    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.828719    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828723    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.830277    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.323827    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.323852    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.323864    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.323877    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327155    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:25.327737    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.327744    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.327750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327754    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.329624    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.824109    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.824127    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.824136    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.824142    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.826476    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:25.827100    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.827108    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.827113    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.827117    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.828738    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.323567    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.323611    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.323617    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.323621    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.325453    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.325886    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.325894    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.325900    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.325904    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.327286    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.327610    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:26.823816    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.823841    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.823852    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.823860    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.827261    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:26.827821    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.827831    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.827839    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.827844    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.829686    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.323970    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.323996    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.324008    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.324015    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.327573    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.328047    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.328056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.328063    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.328067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.329875    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.823992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.824023    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.824082    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.824091    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.827309    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.827980    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.827987    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.827993    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.827998    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.829445    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:28.324903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.324920    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.324929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.324933    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.327085    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.327489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.327497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.327503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.327506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.329732    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.330047    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:28.823366    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.823382    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.823401    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.823422    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.846246    3700 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0816 05:44:28.846781    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.846789    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.846795    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.846803    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.855350    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:29.324024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.324057    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.324064    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.324067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.326984    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.327546    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.327553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.327559    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.327563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.330445    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.824279    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.824299    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.824306    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.824310    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.827888    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:29.828505    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.828512    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.828518    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.828522    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.830193    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.323608    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.323627    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.323635    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.323639    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327262    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:30.327789    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.327798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.327803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327807    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.329683    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.330034    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:30.823965    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.823999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.824072    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.824083    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.828534    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:30.829026    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.829034    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.829040    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.829044    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.830921    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.324089    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:31.324113    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.324130    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.324137    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.328896    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:31.329571    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:31.329579    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.329585    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.329589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.331878    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:31.332446    3700 pod_ready.go:93] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:31.332455    3700 pod_ready.go:82] duration metric: took 7.009249215s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332462    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332502    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m03
	I0816 05:44:31.332507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.332512    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.332516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.334084    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.334465    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:31.334472    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.334477    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.334480    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.335893    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:31.335965    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335979    3700 pod_ready.go:82] duration metric: took 3.51153ms for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:31.335986    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335991    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.336024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.336029    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.336035    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.336038    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.337516    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.338235    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.338242    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.338248    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.338254    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.339975    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.837844    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.837869    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.837881    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.837927    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841316    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:31.841903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.841910    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.841916    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841919    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.843493    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.336771    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.336798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.336809    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.336816    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.340935    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:32.341412    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.341420    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.341426    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.341429    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.342957    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.838157    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.838212    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.838225    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.838232    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.841711    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:32.842249    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.842259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.842267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.842272    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.843815    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.337329    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.337354    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.337366    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.337372    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.341232    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.341870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.341877    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.341883    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.341887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.343419    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.343689    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:33.836128    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.836154    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.836164    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.836170    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.840006    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.840641    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.840650    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.840658    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.840663    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.842504    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.337618    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.337683    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.337693    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.337698    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.339996    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.340499    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.340507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.340513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.340517    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.342040    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.836185    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.836258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.836268    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.836274    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.838913    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.839391    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.839398    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.839404    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.839409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.840888    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:35.336164    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.336192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.336242    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.336256    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.344590    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:35.345076    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.345083    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.345089    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.345106    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.351725    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:35.352079    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:35.838186    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.838207    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.838219    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.838225    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.841779    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:35.842361    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.842368    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.842373    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.842376    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.844076    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.336349    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.336372    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.336387    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.336393    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.339759    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.340248    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.340258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.340267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.340273    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.341840    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.836286    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.836309    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.836320    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.836326    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.839632    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.840490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.840497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.840503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.840506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.842131    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.337695    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.337717    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.337729    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.337736    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.341389    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.341954    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.341964    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.341972    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.341977    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.343432    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.837030    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.837056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.837073    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.837092    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.840202    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.840916    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.840924    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.840929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.840934    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.842593    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.843036    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:38.336396    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.336421    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.336432    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.336441    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.340051    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.340807    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.340818    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.340826    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.340831    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.342328    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:38.836968    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.836993    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.837004    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.837009    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.840369    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.840942    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.840953    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.840961    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.840966    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.842959    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.337347    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.337374    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.337385    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.337391    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.340872    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.341545    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.341553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.341560    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.341563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.343528    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.836514    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.836585    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.836604    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.836610    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.839854    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.840266    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.840275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.840282    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.840287    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.841976    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.337117    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.337140    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.337151    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.337157    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.340623    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:40.341081    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.341089    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.341095    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.341099    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.342480    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.342868    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:40.836255    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.836275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.836287    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.836294    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.839119    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:40.839650    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.839660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.839666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.839671    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.841284    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.336308    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.336328    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.336340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.336356    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.339424    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.339982    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.339990    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.339995    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.339999    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.341644    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.837468    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.837489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.837501    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.837508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.841276    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.842038    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.842045    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.842051    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.842055    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.843559    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.336716    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.336731    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.336737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.336740    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.342330    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:42.343356    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.343364    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.343370    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.343373    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.351635    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:42.352611    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:42.836673    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.836700    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.836711    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.836719    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.840138    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:42.840742    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.840753    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.840762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.840767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.842386    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.842727    3700 pod_ready.go:93] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.842736    3700 pod_ready.go:82] duration metric: took 11.506966083s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842743    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842773    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27jt
	I0816 05:44:42.842778    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.842783    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.842788    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.844352    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.844828    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:42.844835    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.844841    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.844845    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.846280    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.846557    3700 pod_ready.go:93] pod "kube-proxy-c27jt" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.846565    3700 pod_ready.go:82] duration metric: took 3.817397ms for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846572    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846601    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr2c8
	I0816 05:44:42.846605    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.846612    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.846615    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.848062    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.848495    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:42.848503    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.848509    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.848512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.849798    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:42.849858    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849868    3700 pod_ready.go:82] duration metric: took 3.291408ms for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:42.849874    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849879    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.849912    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wcgdv
	I0816 05:44:42.849917    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.849922    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.849925    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.851357    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.851732    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m04
	I0816 05:44:42.851740    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.851745    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.851750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.853123    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.853436    3700 pod_ready.go:93] pod "kube-proxy-wcgdv" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.853444    3700 pod_ready.go:82] duration metric: took 3.55866ms for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853450    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853478    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000
	I0816 05:44:42.853482    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.853488    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.853492    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.854845    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.855143    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.855150    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.855155    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.855160    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.856490    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.856772    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.856781    3700 pod_ready.go:82] duration metric: took 3.32627ms for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.856793    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.037823    3700 request.go:632] Waited for 180.948071ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037884    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037896    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.037908    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.037918    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.041274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.238753    3700 request.go:632] Waited for 196.999605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238909    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.238932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.238939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.242465    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.242850    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:43.242864    3700 pod_ready.go:82] duration metric: took 386.071689ms for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.242873    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.436910    3700 request.go:632] Waited for 193.992761ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437002    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.437025    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.437033    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.439940    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:43.637222    3700 request.go:632] Waited for 196.770029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637254    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.637265    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.637270    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.638883    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:43.638942    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638952    3700 pod_ready.go:82] duration metric: took 396.081081ms for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:43.638959    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638964    3700 pod_ready.go:39] duration metric: took 29.434296561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:43.638986    3700 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:44:43.639045    3700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:44:43.650685    3700 api_server.go:72] duration metric: took 37.941199778s to wait for apiserver process to appear ...
	I0816 05:44:43.650696    3700 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:44:43.650717    3700 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 05:44:43.653719    3700 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 05:44:43.653750    3700 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 05:44:43.653755    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.653762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.653766    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.654323    3700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 05:44:43.654424    3700 api_server.go:141] control plane version: v1.31.0
	I0816 05:44:43.654434    3700 api_server.go:131] duration metric: took 3.733932ms to wait for apiserver health ...
	I0816 05:44:43.654442    3700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 05:44:43.838724    3700 request.go:632] Waited for 184.226134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838846    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838861    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.838873    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.838887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.845134    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:43.850534    3700 system_pods.go:59] 26 kube-system pods found
	I0816 05:44:43.850556    3700 system_pods.go:61] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850563    3700 system_pods.go:61] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850568    3700 system_pods.go:61] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:43.850572    3700 system_pods.go:61] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:43.850575    3700 system_pods.go:61] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:43.850577    3700 system_pods.go:61] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:43.850582    3700 system_pods.go:61] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:43.850586    3700 system_pods.go:61] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:43.850589    3700 system_pods.go:61] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:43.850592    3700 system_pods.go:61] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:43.850594    3700 system_pods.go:61] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:43.850597    3700 system_pods.go:61] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:43.850600    3700 system_pods.go:61] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:43.850603    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:43.850605    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:43.850608    3700 system_pods.go:61] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:43.850611    3700 system_pods.go:61] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:43.850613    3700 system_pods.go:61] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:43.850616    3700 system_pods.go:61] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:43.850618    3700 system_pods.go:61] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:43.850623    3700 system_pods.go:61] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:43.850627    3700 system_pods.go:61] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:43.850629    3700 system_pods.go:61] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:43.850632    3700 system_pods.go:61] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:43.850635    3700 system_pods.go:61] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:43.850637    3700 system_pods.go:61] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:43.850641    3700 system_pods.go:74] duration metric: took 196.198757ms to wait for pod list to return data ...
	I0816 05:44:43.850647    3700 default_sa.go:34] waiting for default service account to be created ...
	I0816 05:44:44.037355    3700 request.go:632] Waited for 186.643021ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037480    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.037499    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.037514    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.040844    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.041066    3700 default_sa.go:45] found service account: "default"
	I0816 05:44:44.041079    3700 default_sa.go:55] duration metric: took 190.431399ms for default service account to be created ...
	I0816 05:44:44.041086    3700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 05:44:44.237668    3700 request.go:632] Waited for 196.520766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237780    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237791    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.237803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.237812    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.243185    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:44.248662    3700 system_pods.go:86] 26 kube-system pods found
	I0816 05:44:44.248675    3700 system_pods.go:89] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248680    3700 system_pods.go:89] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248688    3700 system_pods.go:89] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:44.248691    3700 system_pods.go:89] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:44.248694    3700 system_pods.go:89] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:44.248697    3700 system_pods.go:89] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:44.248701    3700 system_pods.go:89] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:44.248705    3700 system_pods.go:89] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:44.248708    3700 system_pods.go:89] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:44.248711    3700 system_pods.go:89] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:44.248714    3700 system_pods.go:89] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:44.248717    3700 system_pods.go:89] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:44.248720    3700 system_pods.go:89] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:44.248723    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:44.248726    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:44.248728    3700 system_pods.go:89] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:44.248731    3700 system_pods.go:89] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:44.248734    3700 system_pods.go:89] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:44.248738    3700 system_pods.go:89] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:44.248742    3700 system_pods.go:89] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:44.248745    3700 system_pods.go:89] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:44.248748    3700 system_pods.go:89] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:44.248751    3700 system_pods.go:89] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:44.248756    3700 system_pods.go:89] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:44.248759    3700 system_pods.go:89] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:44.248761    3700 system_pods.go:89] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:44.248766    3700 system_pods.go:126] duration metric: took 207.679371ms to wait for k8s-apps to be running ...
	I0816 05:44:44.248773    3700 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 05:44:44.248823    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:44:44.259672    3700 system_svc.go:56] duration metric: took 10.896688ms WaitForService to wait for kubelet
	I0816 05:44:44.259685    3700 kubeadm.go:582] duration metric: took 38.550213651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:44:44.259697    3700 node_conditions.go:102] verifying NodePressure condition ...
	I0816 05:44:44.438728    3700 request.go:632] Waited for 178.976716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438882    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.438928    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.438938    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.442848    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.443702    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443718    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443727    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443730    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443734    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443737    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443741    3700 node_conditions.go:105] duration metric: took 184.043638ms to run NodePressure ...
	I0816 05:44:44.443749    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:44:44.443767    3700 start.go:255] writing updated cluster config ...
	I0816 05:44:44.469062    3700 out.go:201] 
	I0816 05:44:44.489551    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:44.489670    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.512183    3700 out.go:177] * Starting "ha-073000-m04" worker node in "ha-073000" cluster
	I0816 05:44:44.554442    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:44:44.554478    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:44:44.554690    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:44:44.554709    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:44:44.554824    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.555855    3700 start.go:360] acquireMachinesLock for ha-073000-m04: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:44.555978    3700 start.go:364] duration metric: took 98.145µs to acquireMachinesLock for "ha-073000-m04"
	I0816 05:44:44.556004    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:44.556011    3700 fix.go:54] fixHost starting: m04
	I0816 05:44:44.556446    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:44.556472    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:44.565770    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52105
	I0816 05:44:44.566121    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:44.566496    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:44.566517    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:44.566729    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:44.566845    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.566927    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:44:44.567001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.567096    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3643
	I0816 05:44:44.568001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.568039    3700 fix.go:112] recreateIfNeeded on ha-073000-m04: state=Stopped err=<nil>
	I0816 05:44:44.568049    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	W0816 05:44:44.568121    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:44.606366    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m04" ...
	I0816 05:44:44.663139    3700 main.go:141] libmachine: (ha-073000-m04) Calling .Start
	I0816 05:44:44.663315    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.663399    3700 main.go:141] libmachine: (ha-073000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid
	I0816 05:44:44.664399    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.664426    3700 main.go:141] libmachine: (ha-073000-m04) DBG | pid 3643 is in state "Stopped"
	I0816 05:44:44.664490    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid...
	I0816 05:44:44.664647    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Using UUID f2db23bc-c2a0-4ea2-9158-e93c928b5416
	I0816 05:44:44.689456    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Generated MAC f2:da:75:16:53:b7
	I0816 05:44:44.689481    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:44:44.689607    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689641    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689691    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f2db23bc-c2a0-4ea2-9158-e93c928b5416", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:44:44.689730    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f2db23bc-c2a0-4ea2-9158-e93c928b5416 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:44:44.689749    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:44:44.691094    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Pid is 3728
	I0816 05:44:44.691611    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Attempt 0
	I0816 05:44:44.691627    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.691728    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3728
	I0816 05:44:44.693940    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Searching for f2:da:75:16:53:b7 in /var/db/dhcpd_leases ...
	I0816 05:44:44.694051    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:44:44.694092    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 05:44:44.694127    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:44:44.694155    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:44:44.694170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetConfigRaw
	I0816 05:44:44.694173    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:44:44.694200    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found match: f2:da:75:16:53:b7
	I0816 05:44:44.694236    3700 main.go:141] libmachine: (ha-073000-m04) DBG | IP: 192.169.0.8
	I0816 05:44:44.695065    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:44.695278    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.695692    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:44:44.695703    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.695833    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:44.695931    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:44.696050    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696166    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696254    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:44.696382    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:44.696563    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:44.696574    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:44:44.699477    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:44:44.708676    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:44:44.709681    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:44.709706    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:44.709738    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:44.709752    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.096454    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:44:45.096470    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:44:45.211316    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:45.211336    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:45.211351    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:45.211358    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.212223    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:44:45.212237    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:44:50.828020    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:44:50.828090    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:44:50.828101    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:44:50.851950    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:44:55.760625    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:44:55.760639    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760793    3700 buildroot.go:166] provisioning hostname "ha-073000-m04"
	I0816 05:44:55.760805    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760899    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.760990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.761085    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761159    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761232    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.761366    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.761519    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.761528    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m04 && echo "ha-073000-m04" | sudo tee /etc/hostname
	I0816 05:44:55.833156    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m04
	
	I0816 05:44:55.833170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.833308    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.833414    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833503    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833603    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.833738    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.833899    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.833910    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:44:55.900349    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:44:55.900365    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:44:55.900383    3700 buildroot.go:174] setting up certificates
	I0816 05:44:55.900391    3700 provision.go:84] configureAuth start
	I0816 05:44:55.900398    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.900534    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:55.900638    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.900717    3700 provision.go:143] copyHostCerts
	I0816 05:44:55.900744    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900810    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:44:55.900816    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900947    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:44:55.901143    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901190    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:44:55.901195    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901271    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:44:55.901417    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901455    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:44:55.901460    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901535    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:44:55.901685    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m04 san=[127.0.0.1 192.169.0.8 ha-073000-m04 localhost minikube]
	I0816 05:44:56.021206    3700 provision.go:177] copyRemoteCerts
	I0816 05:44:56.021264    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:44:56.021279    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.021423    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.021518    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.021612    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.021689    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:56.060318    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:44:56.060388    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:44:56.079682    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:44:56.079759    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:44:56.100671    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:44:56.100755    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:44:56.120911    3700 provision.go:87] duration metric: took 220.512292ms to configureAuth
	I0816 05:44:56.120927    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:44:56.121094    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:56.121108    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:56.121244    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.121333    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.121413    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121488    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121577    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.121685    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.121810    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.121818    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:44:56.183314    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:44:56.183328    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:44:56.183405    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:44:56.183418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.183543    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.183624    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183720    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183811    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.183942    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.184086    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.184135    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:44:56.259224    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:44:56.259247    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.259375    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.259477    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259561    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259648    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.259767    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.259901    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.259914    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:57.846578    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:57.846593    3700 machine.go:96] duration metric: took 13.151151754s to provisionDockerMachine
	I0816 05:44:57.846601    3700 start.go:293] postStartSetup for "ha-073000-m04" (driver="hyperkit")
	I0816 05:44:57.846608    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:57.846619    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.846827    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:57.846841    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.846963    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.847057    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.847190    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.847325    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.890251    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:57.893714    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:57.893725    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:57.893828    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:57.894005    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:57.894011    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:57.894210    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:57.903672    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:57.936540    3700 start.go:296] duration metric: took 89.932708ms for postStartSetup
	I0816 05:44:57.936562    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.936732    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:57.936743    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.936825    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.936908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.936990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.937072    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.974376    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:57.974431    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:58.026259    3700 fix.go:56] duration metric: took 13.470511319s for fixHost
	I0816 05:44:58.026289    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.026437    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.026567    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026661    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026739    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.026870    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:58.027046    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:58.027055    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:58.089267    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812298.037026032
	
	I0816 05:44:58.089280    3700 fix.go:216] guest clock: 1723812298.037026032
	I0816 05:44:58.089285    3700 fix.go:229] Guest: 2024-08-16 05:44:58.037026032 -0700 PDT Remote: 2024-08-16 05:44:58.026278 -0700 PDT m=+113.498555850 (delta=10.748032ms)
	I0816 05:44:58.089296    3700 fix.go:200] guest clock delta is within tolerance: 10.748032ms
	I0816 05:44:58.089300    3700 start.go:83] releasing machines lock for "ha-073000-m04", held for 13.533577972s
	I0816 05:44:58.089315    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.089444    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:58.113019    3700 out.go:177] * Found network options:
	I0816 05:44:58.133803    3700 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 05:44:58.154869    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.154894    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.154908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155540    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155619    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:58.155647    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	W0816 05:44:58.155674    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.155690    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.155757    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:58.155778    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.155796    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155925    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155946    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156056    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156076    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156184    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156198    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:58.156285    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	W0816 05:44:58.193631    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:58.193701    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:58.236070    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:58.236085    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.236153    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.252488    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:58.262662    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:58.272809    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.272876    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:58.283088    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.293199    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:58.302692    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.312080    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:58.321436    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:58.330649    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:58.339785    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:58.349176    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:58.357543    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:58.365884    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.462788    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:44:58.483641    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.483717    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:58.502138    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.514733    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:58.534512    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.547599    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.558372    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:58.578053    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.588770    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.604147    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:58.607001    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:58.614131    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:58.627780    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:58.724561    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:58.838116    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.838140    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:58.852167    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.944841    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:45:59.852967    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.909307795s)
	I0816 05:45:59.853035    3700 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 05:45:59.887051    3700 out.go:201] 
	W0816 05:45:59.908317    3700 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 12:44:56 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.477961385Z" level=info msg="Starting up"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.478651123Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.479149818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=492
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.497251014Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512736016Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512786960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512832906Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512843449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512990846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513025418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513142091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513176878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513189848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513197982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513328837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513514337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515123592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515162448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515278467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515313029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515424326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515511733Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517455314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517544772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517585141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517601510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517612297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517713222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517933474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518033958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518069471Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518088650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518101306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518111033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518119014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518128230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518155729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518197753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518209146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518217247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518232727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518242479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518257521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518270826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518280074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518288937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518296642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518305847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518314748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518324203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518386404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518396238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518404404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518414105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518428969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518437387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518445132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518491204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518506443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518514647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518522672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518529245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518537689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518544653Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518899090Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518957259Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519012111Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519026933Z" level=info msg="containerd successfully booted in 0.022691s"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.498621326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.511032578Z" level=info msg="Loading containers: start."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.643404815Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.708639630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756823239Z" level=warning msg="error locating sandbox id 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd: sandbox 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd not found"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756925263Z" level=info msg="Loading containers: done."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.763915655Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.764081581Z" level=info msg="Daemon has completed initialization"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.785909245Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 12:44:57 ha-073000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.786078565Z" level=info msg="API listen on [::]:2376"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.921954679Z" level=info msg="Processing signal 'terminated'"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923118966Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923233559Z" level=info msg="Daemon shutdown complete"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923326494Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923341810Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 12:44:58 ha-073000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 dockerd[1163]: time="2024-08-16T12:44:59.962962742Z" level=info msg="Starting up"
	Aug 16 12:45:59 ha-073000-m04 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 05:45:59.908450    3700 out.go:270] * 
	W0816 05:45:59.909700    3700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:45:59.951019    3700 out.go:201] 
	
	
	==> Docker <==
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.287053661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.299959837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300038537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300052775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.299956506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300173351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300182269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300373518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.302304016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263043998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263098724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263109067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263358182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255557325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255603642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255615508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255680038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250561832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250773743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250784609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.251063565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.467903824Z" level=info msg="shim disconnected" id=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 namespace=moby
	Aug 16 12:45:12 ha-073000 dockerd[1123]: time="2024-08-16T12:45:12.468049001Z" level=info msg="ignoring event" container=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.468960050Z" level=warning msg="cleaning up after shim disconnected" id=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 namespace=moby
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.469006000Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	53efad29a380d       cbb01a7bd410d       49 seconds ago       Running             coredns                   2                   97e99c8f985cf       coredns-6f6b679f8f-2fdpw
	dde34f0f1905b       cbb01a7bd410d       52 seconds ago       Running             coredns                   2                   cbedc350d9a72       coredns-6f6b679f8f-vf22s
	d53d4035a10b6       12968670680f4       About a minute ago   Running             kindnet-cni               2                   64a7fad2cec73       kindnet-6w49d
	ca60e2c666f71       ad83b2ca7b09e       About a minute ago   Running             kube-proxy                2                   b53d07e491338       kube-proxy-6nsmz
	8da86bdda1be0       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   4d9d98ef92415       storage-provisioner
	0d6c46a2a7e36       8c811b4aec35f       About a minute ago   Running             busybox                   2                   5a113806a9083       busybox-7dff88458-tbh6p
	d3bc52584d24f       045733566833c       About a minute ago   Running             kube-controller-manager   4                   ae78baa701d35       kube-controller-manager-ha-073000
	5261283416a26       38af8ddebf499       2 minutes ago        Running             kube-vip                  1                   8a3e2cb139422       kube-vip-ha-073000
	4132415e113f3       604f5db92eaa8       2 minutes ago        Running             kube-apiserver            2                   5b0df036eddf2       kube-apiserver-ha-073000
	35d4653c8cb24       1766f54c897f0       2 minutes ago        Running             kube-scheduler            2                   439518e74411c       kube-scheduler-ha-073000
	746a25d99f7e9       2e96e5913fc06       2 minutes ago        Running             etcd                      2                   01df760da9958       etcd-ha-073000
	6e9db99cb249f       045733566833c       2 minutes ago        Exited              kube-controller-manager   3                   ae78baa701d35       kube-controller-manager-ha-073000
	45a286f9bcbe0       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   917fa53aa567f       busybox-7dff88458-tbh6p
	4cea51d49ca8a       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   da30f2a6f620a       coredns-6f6b679f8f-2fdpw
	ac45a09e68e6e       12968670680f4       5 minutes ago        Exited              kindnet-cni               1                   b7cba0c6730d7       kindnet-6w49d
	6bd9db004e0f2       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   9723d60c28159       coredns-6f6b679f8f-vf22s
	9ac6acc1a0063       ad83b2ca7b09e       5 minutes ago        Exited              kube-proxy                1                   b73943b66f38c       kube-proxy-6nsmz
	817998dc223bd       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   3aee62c916259       kube-vip-ha-073000
	f7dc3b77e3e36       1766f54c897f0       5 minutes ago        Exited              kube-scheduler            1                   a39f7babb7d55       kube-scheduler-ha-073000
	bbea06dccbfca       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   a744d07ec14bd       etcd-ha-073000
	2794b950d2a1a       604f5db92eaa8       5 minutes ago        Exited              kube-apiserver            1                   1b8fe978c9574       kube-apiserver-ha-073000
	
	
	==> coredns [4cea51d49ca8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48498 - 38677 "HINFO IN 4258714537102711440.5140164315290019176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005829685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1808017765]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30003ms):
	Trace[1808017765]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.953)
	Trace[1808017765]: [30.003041878s] [30.003041878s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1742264180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.953) (total time: 30001ms):
	Trace[1742264180]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.954)
	Trace[1742264180]: [30.001952334s] [30.001952334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1076876587]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30003ms):
	Trace[1076876587]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.953)
	Trace[1076876587]: [30.003826704s] [30.003826704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [53efad29a380] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39872 - 38910 "HINFO IN 8344980917306972801.1301155251568300364. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073019455s
	
	
	==> coredns [6bd9db004e0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34749 - 50535 "HINFO IN 6826521007957410060.7380457420194179284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005638083s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2071701981]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30002ms):
	Trace[2071701981]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.953)
	Trace[2071701981]: [30.002664177s] [30.002664177s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[81888879]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.953) (total time: 30001ms):
	Trace[81888879]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.955)
	Trace[81888879]: [30.001793099s] [30.001793099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1240220066]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30002ms):
	Trace[1240220066]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.953)
	Trace[1240220066]: [30.002668968s] [30.002668968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dde34f0f1905] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40084 - 46500 "HINFO IN 4264016345420347209.7126907159756313777. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02255325s
	
	
	==> describe nodes <==
	Name:               ha-073000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T05_34_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:34:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:45:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:44:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-073000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e52eb9451e244b8aa696383c6e23553e
	  System UUID:                449f4e9a-0000-0000-9271-363ec4bdb253
	  Boot ID:                    eacb4432-039c-4561-b63c-a22e6109d42f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tbh6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 coredns-6f6b679f8f-2fdpw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-6f6b679f8f-vf22s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-073000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6w49d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-073000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-073000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6nsmz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-073000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-073000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m3s                   kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-073000 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           8m58s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           6m52s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  Starting                 5m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                   node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           91s                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	
	
	Name:               ha-073000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_35_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:45:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-073000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 74aa745bc3424232a48f3120c1bc5001
	  System UUID:                2ecb470f-0000-0000-9281-b78e2fd82941
	  Boot ID:                    c5f8c789-3b3b-40a8-beef-6bd94cba0d06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mq4rd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 etcd-ha-073000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-vjtpn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-073000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-073000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-c27jt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-073000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-073000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   Starting                 6m55s                  kube-proxy       
	  Normal   Starting                 5m23s                  kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           8m58s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   Starting                 7m                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 6m59s                  kubelet          Node ha-073000-m02 has been rebooted, boot id: cd5a6628-e2f5-4c6f-91f1-5ff24dad7ec8
	  Normal   NodeHasSufficientMemory  6m59s (x2 over 7m)     kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m59s (x2 over 7m)     kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m59s (x2 over 7m)     kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m52s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m37s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m25s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   Starting                 116s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           103s                   node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	
	
	Name:               ha-073000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_37_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:42:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-073000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad7455d2901a48ca849fbe74152548be
	  System UUID:                f2db4ea2-0000-0000-9158-e93c928b5416
	  Boot ID:                    c501d95c-4cf4-48d1-a140-e26142bbc85e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8cgvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-67bkr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m9s
	  kube-system                 kube-proxy-wcgdv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m1s                   kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   NodeHasSufficientPID     8m9s (x2 over 8m9s)    kubelet          Node ha-073000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m9s (x2 over 8m9s)    kubelet          Node ha-073000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m9s (x2 over 8m9s)    kubelet          Node ha-073000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  8m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8m7s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           8m5s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           8m4s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeReady                7m46s                  kubelet          Node ha-073000-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m53s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           5m26s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           5m5s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeNotReady             4m46s                  node-controller  Node ha-073000-m04 status is now: NodeNotReady
	  Normal   Starting                 3m37s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m37s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 3m37s (x2 over 3m37s)  kubelet          Node ha-073000-m04 has been rebooted, boot id: c501d95c-4cf4-48d1-a140-e26142bbc85e
	  Normal   NodeHasSufficientMemory  3m37s (x3 over 3m37s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m37s (x3 over 3m37s)  kubelet          Node ha-073000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m37s (x3 over 3m37s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             3m37s                  kubelet          Node ha-073000-m04 status is now: NodeNotReady
	  Normal   NodeReady                3m37s                  kubelet          Node ha-073000-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           92s                    node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeNotReady             64s                    node-controller  Node ha-073000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.035575] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007992] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.682346] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.707235] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.270759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.716408] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.106851] systemd-fstab-generator[507]: Ignoring "noauto" option for root device
	[  +2.000923] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.251980] systemd-fstab-generator[1089]: Ignoring "noauto" option for root device
	[  +0.119628] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.114647] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +2.500095] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.050270] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.052123] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.110888] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.123146] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.430382] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +6.822975] kauditd_printk_skb: 110 callbacks suppressed
	[Aug16 12:44] kauditd_printk_skb: 40 callbacks suppressed
	[ +27.030866] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.057508] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [746a25d99f7e] <==
	{"level":"info","ts":"2024-08-16T12:44:13.063659Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:44:13.376775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T12:44:13.376821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T12:44:13.376910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-16T12:44:13.377016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2787] sent MsgPreVote request to 330f9299269ea03a at term 3"}
	{"level":"info","ts":"2024-08-16T12:44:13.378239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from 330f9299269ea03a at term 3"}
	{"level":"info","ts":"2024-08-16T12:44:13.378332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-08-16T12:44:13.378341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.378346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.378352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2787] sent MsgVote request to 330f9299269ea03a at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.387739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from 330f9299269ea03a at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.387848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-08-16T12:44:13.387941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.388012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-08-16T12:44:13.401221Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-073000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T12:44:13.401486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:44:13.402035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:44:13.402191Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T12:44:13.402239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T12:44:13.402764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T12:44:13.403430Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-08-16T12:44:13.403860Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T12:44:13.404492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-16T12:44:14.432877Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"330f9299269ea03a","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:44:14.432945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"330f9299269ea03a","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	
	
	==> etcd [bbea06dccbfc] <==
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630077Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:51.632815Z","time spent":"4.997260741s","remote":"127.0.0.1:36324","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.901369422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T12:42:56.630156Z","caller":"traceutil/trace.go:171","msg":"trace[1178344434] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"5.901383392s","start":"2024-08-16T12:42:50.728768Z","end":"2024-08-16T12:42:56.630152Z","steps":["trace[1178344434] 'agreement among raft nodes before linearized reading'  (duration: 5.901369776s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:42:56.630168Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:50.728645Z","time spent":"5.901518585s","remote":"127.0.0.1:36182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.774725657s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T12:42:56.630221Z","caller":"traceutil/trace.go:171","msg":"trace[1675355472] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.774736143s","start":"2024-08-16T12:42:54.855482Z","end":"2024-08-16T12:42:56.630218Z","steps":["trace[1675355472] 'agreement among raft nodes before linearized reading'  (duration: 1.774725523s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:42:56.630230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:54.855466Z","time spent":"1.774762152s","remote":"127.0.0.1:36326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.677616Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:42:56.677932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T12:42:56.680885Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T12:42:56.681006Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681017Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681029Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681092Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681114Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681156Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681199Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.684250Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T12:42:56.684317Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T12:42:56.684324Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-073000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 12:46:02 up 2 min,  0 users,  load average: 0.27, 0.13, 0.05
	Linux ha-073000 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ac45a09e68e6] <==
	I0816 12:42:18.785854       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:18.785985       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0816 12:42:18.786013       1 main.go:322] Node ha-073000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:42:18.786121       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:18.786181       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:28.779187       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:28.779366       1 main.go:299] handling current node
	I0816 12:42:28.779397       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:28.779418       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:28.779624       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0816 12:42:28.779796       1 main.go:322] Node ha-073000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:42:28.779915       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:28.780031       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:38.780023       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:38.780075       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:38.780197       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:38.780225       1 main.go:299] handling current node
	I0816 12:42:38.780252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:38.780276       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:48.780124       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:48.780274       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:48.780772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:48.780891       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:48.781100       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:48.781255       1 main.go:299] handling current node
	
	
	==> kindnet [d53d4035a10b] <==
	I0816 12:45:17.503878       1 main.go:299] handling current node
	I0816 12:45:27.503638       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:45:27.503917       1 main.go:299] handling current node
	I0816 12:45:27.503973       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:45:27.504000       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:45:27.504377       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:45:27.504470       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:45:37.498181       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:45:37.498212       1 main.go:299] handling current node
	I0816 12:45:37.498227       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:45:37.498235       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:45:37.498478       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:45:37.498553       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:45:47.494238       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:45:47.494314       1 main.go:299] handling current node
	I0816 12:45:47.494325       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:45:47.494330       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:45:47.494401       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:45:47.494408       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:45:57.500574       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:45:57.500984       1 main.go:299] handling current node
	I0816 12:45:57.501017       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:45:57.501024       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:45:57.503402       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:45:57.503488       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2794b950d2a1] <==
	W0816 12:42:56.654442       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654464       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654486       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654519       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.654574       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	W0816 12:42:56.654759       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654785       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654809       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654833       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654856       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654881       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654910       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.655805       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0816 12:42:56.655997       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0816 12:42:56.656363       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656471       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656723       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656752       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656773       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656793       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.656860       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0816 12:42:56.659986       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.660258       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.660291       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0816 12:42:56.693611       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [4132415e113f] <==
	I0816 12:44:14.239627       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0816 12:44:14.261469       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0816 12:44:14.333789       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 12:44:14.337352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 12:44:14.337399       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 12:44:14.338838       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 12:44:14.338867       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 12:44:14.339105       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 12:44:14.339594       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 12:44:14.339757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 12:44:14.346222       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 12:44:14.351290       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:44:14.351356       1 policy_source.go:224] refreshing policies
	I0816 12:44:14.362299       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 12:44:14.362365       1 aggregator.go:171] initial CRD sync complete...
	I0816 12:44:14.362371       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 12:44:14.362420       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 12:44:14.362476       1 cache.go:39] Caches are synced for autoregister controller
	W0816 12:44:14.363142       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0816 12:44:14.364917       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:44:14.371348       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0816 12:44:14.374126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0816 12:44:14.421402       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 12:44:15.239815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 12:44:15.584175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [6e9db99cb249] <==
	I0816 12:43:54.709568       1 serving.go:386] Generated self-signed cert in-memory
	I0816 12:43:55.294404       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 12:43:55.294436       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:43:55.301712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 12:43:55.302087       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 12:43:55.302339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:43:55.302387       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 12:44:15.306064       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d3bc52584d24] <==
	I0816 12:44:43.508436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.674164ms"
	I0816 12:44:43.508506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.431µs"
	E0816 12:44:50.788298       1 gc_controller.go:151] "Failed to get node" err="node \"ha-073000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-073000-m03"
	E0816 12:44:50.788335       1 gc_controller.go:151] "Failed to get node" err="node \"ha-073000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-073000-m03"
	E0816 12:44:50.788344       1 gc_controller.go:151] "Failed to get node" err="node \"ha-073000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-073000-m03"
	E0816 12:44:50.788348       1 gc_controller.go:151] "Failed to get node" err="node \"ha-073000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-073000-m03"
	E0816 12:44:50.788352       1 gc_controller.go:151] "Failed to get node" err="node \"ha-073000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-073000-m03"
	I0816 12:44:56.217497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="42.318µs"
	I0816 12:44:58.548051       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:44:58.558745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:44:58.565073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.356108ms"
	I0816 12:44:58.565448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="314.763µs"
	I0816 12:44:59.220778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="38.326µs"
	I0816 12:45:00.850222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:45:03.600907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:45:09.822075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="41.384µs"
	I0816 12:45:09.840078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="8.760376ms"
	I0816 12:45:09.840368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="59.519µs"
	I0816 12:45:09.862969       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-llhg4 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-llhg4\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:45:09.863081       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"08a5063a-4467-49d7-ad30-6cd0fa5391c1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-llhg4 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-llhg4": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:45:12.855283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.038µs"
	I0816 12:45:12.884367       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-llhg4 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-llhg4\": the object has been modified; please apply your changes to the latest version and try again"
	I0816 12:45:12.885418       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"08a5063a-4467-49d7-ad30-6cd0fa5391c1", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-llhg4 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-llhg4": the object has been modified; please apply your changes to the latest version and try again
	I0816 12:45:12.903995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="36.058267ms"
	I0816 12:45:12.904461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="165.928µs"
	
	
	==> kube-proxy [9ac6acc1a006] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:40:58.119214       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:40:58.140964       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 12:40:58.141059       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:40:58.183215       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:40:58.183283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:40:58.183302       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:40:58.187885       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:40:58.189155       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:40:58.189310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:40:58.194403       1 config.go:197] "Starting service config controller"
	I0816 12:40:58.194925       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:40:58.195375       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:40:58.195405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:40:58.198342       1 config.go:326] "Starting node config controller"
	I0816 12:40:58.198371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:40:58.295736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:40:58.295786       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:40:58.298813       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ca60e2c666f7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:44:42.606451       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:44:42.619729       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 12:44:42.619898       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:44:42.653091       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:44:42.653110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:44:42.653134       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:44:42.655923       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:44:42.656347       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:44:42.656397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:44:42.658856       1 config.go:197] "Starting service config controller"
	I0816 12:44:42.659229       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:44:42.659453       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:44:42.659543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:44:42.660619       1 config.go:326] "Starting node config controller"
	I0816 12:44:42.660792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:44:42.759577       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:44:42.759915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:44:42.761512       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35d4653c8cb2] <==
	I0816 12:43:55.210137       1 serving.go:386] Generated self-signed cert in-memory
	W0816 12:44:05.748449       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 12:44:05.748496       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 12:44:05.748502       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 12:44:14.279151       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 12:44:14.279189       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:44:14.280821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 12:44:14.284117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 12:44:14.284473       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:44:14.287015       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:44:14.385344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f7dc3b77e3e3] <==
	I0816 12:40:13.671703       1 serving.go:386] Generated self-signed cert in-memory
	W0816 12:40:24.123596       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 12:40:24.123639       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 12:40:24.123645       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 12:40:33.202244       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 12:40:33.203684       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:40:33.206420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 12:40:33.206746       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 12:40:33.207443       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:40:33.207300       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:40:33.311876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 12:42:32.624416       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8cgvv\": pod busybox-7dff88458-8cgvv is already assigned to node \"ha-073000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8cgvv" node="ha-073000-m04"
	E0816 12:42:32.627510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5d52700b-2644-418d-ab40-6fc48f247d6f(default/busybox-7dff88458-8cgvv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8cgvv"
	E0816 12:42:32.627623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8cgvv\": pod busybox-7dff88458-8cgvv is already assigned to node \"ha-073000-m04\"" pod="default/busybox-7dff88458-8cgvv"
	I0816 12:42:32.627739       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8cgvv" node="ha-073000-m04"
	E0816 12:42:56.710485       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 12:44:47 ha-073000 kubelet[1540]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:44:47 ha-073000 kubelet[1540]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:44:47 ha-073000 kubelet[1540]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:44:56 ha-073000 kubelet[1540]: I0816 12:44:56.203055    1540 scope.go:117] "RemoveContainer" containerID="6bd9db004e0f209d037d6f3cf7918a3a3ec484e06254eb85c5b042bd2df846a9"
	Aug 16 12:44:56 ha-073000 kubelet[1540]: E0816 12:44:56.203152    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-6f6b679f8f-vf22s_kube-system(b19e457d-d8ad-4a2f-a26d-2c4cce1dd187)\"" pod="kube-system/coredns-6f6b679f8f-vf22s" podUID="b19e457d-d8ad-4a2f-a26d-2c4cce1dd187"
	Aug 16 12:44:59 ha-073000 kubelet[1540]: I0816 12:44:59.203363    1540 scope.go:117] "RemoveContainer" containerID="4cea51d49ca8ab3cce58d324799883164b7edd66abf66c133e42c9bc20a58bf2"
	Aug 16 12:44:59 ha-073000 kubelet[1540]: E0816 12:44:59.203478    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-6f6b679f8f-2fdpw_kube-system(5eed297b-a1f8-4042-918d-abbd8cd0c025)\"" pod="kube-system/coredns-6f6b679f8f-2fdpw" podUID="5eed297b-a1f8-4042-918d-abbd8cd0c025"
	Aug 16 12:45:09 ha-073000 kubelet[1540]: I0816 12:45:09.203262    1540 scope.go:117] "RemoveContainer" containerID="6bd9db004e0f209d037d6f3cf7918a3a3ec484e06254eb85c5b042bd2df846a9"
	Aug 16 12:45:12 ha-073000 kubelet[1540]: I0816 12:45:12.202884    1540 scope.go:117] "RemoveContainer" containerID="4cea51d49ca8ab3cce58d324799883164b7edd66abf66c133e42c9bc20a58bf2"
	Aug 16 12:45:12 ha-073000 kubelet[1540]: I0816 12:45:12.862449    1540 scope.go:117] "RemoveContainer" containerID="42c3e8541b732948000e387df491c8ea1f9d42ce953deeea89da66b1198967f1"
	Aug 16 12:45:12 ha-073000 kubelet[1540]: I0816 12:45:12.862725    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:12 ha-073000 kubelet[1540]: E0816 12:45:12.862845    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:24 ha-073000 kubelet[1540]: I0816 12:45:24.202903    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:24 ha-073000 kubelet[1540]: E0816 12:45:24.203175    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:39 ha-073000 kubelet[1540]: I0816 12:45:39.202752    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:39 ha-073000 kubelet[1540]: E0816 12:45:39.202899    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:47 ha-073000 kubelet[1540]: E0816 12:45:47.224411    1540 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:45:47 ha-073000 kubelet[1540]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:45:50 ha-073000 kubelet[1540]: I0816 12:45:50.203594    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:50 ha-073000 kubelet[1540]: E0816 12:45:50.204150    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:46:01 ha-073000 kubelet[1540]: I0816 12:46:01.203069    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:46:01 ha-073000 kubelet[1540]: E0816 12:46:01.203181    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-073000 -n ha-073000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-073000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (179.51s)
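The kubelet log above repeatedly reports CrashLoopBackOff for the coredns and storage-provisioner containers. A quick way to triage a post-mortem dump like this is to extract which containers the kubelet keeps restarting. The snippet below is a minimal sketch, not part of the test harness; the embedded sample lines are copied from the dump above, and in practice you would pipe in the full log (e.g. `minikube -p ha-073000 logs | triage`):

```shell
# Minimal triage sketch: list containers stuck in CrashLoopBackOff
# from kubelet log lines. Reads log text on stdin.
triage() { grep 'CrashLoopBackOff' | grep -o 'container=[a-z-]*' | sort -u; }

# Sample input taken verbatim-style from the kubelet dump above.
triage <<'EOF'
Aug 16 12:44:56 ha-073000 kubelet[1540]: E0816 12:44:56.203152 1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-6f6b679f8f-vf22s_kube-system(b19e457d-d8ad-4a2f-a26d-2c4cce1dd187)\""
Aug 16 12:45:24 ha-073000 kubelet[1540]: E0816 12:45:24.203175 1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\""
EOF
# prints:
# container=coredns
# container=storage-provisioner
```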

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-073000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-073000 --control-plane -v=7 --alsologtostderr: (1m10.938281907s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr: exit status 2 (461.389456ms)

                                                
                                                
-- stdout --
	ha-073000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-073000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-073000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	
	ha-073000-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 05:47:15.268852    3769 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:47:15.269137    3769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:47:15.269143    3769 out.go:358] Setting ErrFile to fd 2...
	I0816 05:47:15.269146    3769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:47:15.269335    3769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:47:15.269529    3769 out.go:352] Setting JSON to false
	I0816 05:47:15.269557    3769 mustload.go:65] Loading cluster: ha-073000
	I0816 05:47:15.269600    3769 notify.go:220] Checking for updates...
	I0816 05:47:15.269883    3769 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:47:15.269899    3769 status.go:255] checking status of ha-073000 ...
	I0816 05:47:15.270291    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.270345    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.279431    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52208
	I0816 05:47:15.279891    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.280304    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.280321    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.280541    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.280655    3769 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:47:15.280749    3769 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:47:15.280823    3769 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:47:15.281778    3769 status.go:330] ha-073000 host status = "Running" (err=<nil>)
	I0816 05:47:15.281802    3769 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:47:15.282032    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.282052    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.290517    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52210
	I0816 05:47:15.290821    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.291135    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.291144    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.291369    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.291479    3769 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:47:15.291566    3769 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:47:15.291828    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.291859    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.303986    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52212
	I0816 05:47:15.304352    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.304684    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.304695    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.304941    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.305050    3769 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:47:15.305198    3769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:47:15.305223    3769 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:47:15.305311    3769 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:47:15.305398    3769 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:47:15.305492    3769 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:47:15.305581    3769 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:47:15.337254    3769 ssh_runner.go:195] Run: systemctl --version
	I0816 05:47:15.341461    3769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:47:15.353671    3769 kubeconfig.go:125] found "ha-073000" server: "https://192.169.0.254:8443"
	I0816 05:47:15.353697    3769 api_server.go:166] Checking apiserver status ...
	I0816 05:47:15.353740    3769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:47:15.367891    3769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2116/cgroup
	W0816 05:47:15.382185    3769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:47:15.382270    3769 ssh_runner.go:195] Run: ls
	I0816 05:47:15.386027    3769 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 05:47:15.390218    3769 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 05:47:15.390232    3769 status.go:422] ha-073000 apiserver status = Running (err=<nil>)
	I0816 05:47:15.390241    3769 status.go:257] ha-073000 status: &{Name:ha-073000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:47:15.390252    3769 status.go:255] checking status of ha-073000-m02 ...
	I0816 05:47:15.390527    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.390547    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.399242    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52216
	I0816 05:47:15.399706    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.400048    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.400062    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.400267    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.400376    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:47:15.400459    3769 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:47:15.400578    3769 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3719
	I0816 05:47:15.401528    3769 status.go:330] ha-073000-m02 host status = "Running" (err=<nil>)
	I0816 05:47:15.401539    3769 host.go:66] Checking if "ha-073000-m02" exists ...
	I0816 05:47:15.401794    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.401815    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.410488    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52218
	I0816 05:47:15.410905    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.411238    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.411251    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.411483    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.411598    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:47:15.411674    3769 host.go:66] Checking if "ha-073000-m02" exists ...
	I0816 05:47:15.411974    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.411998    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.420711    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52220
	I0816 05:47:15.421065    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.421428    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.421446    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.421677    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.421784    3769 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:47:15.421922    3769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:47:15.421934    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:47:15.422002    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:47:15.422076    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:47:15.422165    3769 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:47:15.422240    3769 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:47:15.452070    3769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:47:15.464409    3769 kubeconfig.go:125] found "ha-073000" server: "https://192.169.0.254:8443"
	I0816 05:47:15.464425    3769 api_server.go:166] Checking apiserver status ...
	I0816 05:47:15.464468    3769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:47:15.476794    3769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2142/cgroup
	W0816 05:47:15.484119    3769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:47:15.484163    3769 ssh_runner.go:195] Run: ls
	I0816 05:47:15.487327    3769 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 05:47:15.490399    3769 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 05:47:15.490410    3769 status.go:422] ha-073000-m02 apiserver status = Running (err=<nil>)
	I0816 05:47:15.490419    3769 status.go:257] ha-073000-m02 status: &{Name:ha-073000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:47:15.490429    3769 status.go:255] checking status of ha-073000-m04 ...
	I0816 05:47:15.490714    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.490737    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.499399    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52224
	I0816 05:47:15.499751    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.500130    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.500145    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.500348    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.500446    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:47:15.500569    3769 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:47:15.500668    3769 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3728
	I0816 05:47:15.501638    3769 status.go:330] ha-073000-m04 host status = "Running" (err=<nil>)
	I0816 05:47:15.501645    3769 host.go:66] Checking if "ha-073000-m04" exists ...
	I0816 05:47:15.501904    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.501930    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.510468    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52226
	I0816 05:47:15.510811    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.511141    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.511150    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.511388    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.511504    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:47:15.511589    3769 host.go:66] Checking if "ha-073000-m04" exists ...
	I0816 05:47:15.511834    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.511857    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.520428    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52228
	I0816 05:47:15.520777    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.521120    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.521133    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.521340    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.521448    3769 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:47:15.521601    3769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:47:15.521613    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:47:15.521700    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:47:15.521792    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:47:15.521875    3769 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:47:15.521949    3769 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:47:15.556987    3769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:47:15.567647    3769 status.go:257] ha-073000-m04 status: &{Name:ha-073000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:47:15.567665    3769 status.go:255] checking status of ha-073000-m05 ...
	I0816 05:47:15.567939    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.567966    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.576891    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52231
	I0816 05:47:15.577255    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.577597    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.577611    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.577846    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.577966    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetState
	I0816 05:47:15.578050    3769 main.go:141] libmachine: (ha-073000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:47:15.578148    3769 main.go:141] libmachine: (ha-073000-m05) DBG | hyperkit pid from json: 3761
	I0816 05:47:15.579123    3769 status.go:330] ha-073000-m05 host status = "Running" (err=<nil>)
	I0816 05:47:15.579134    3769 host.go:66] Checking if "ha-073000-m05" exists ...
	I0816 05:47:15.579387    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.579429    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.587962    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52233
	I0816 05:47:15.588319    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.588638    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.588649    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.588885    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.588985    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetIP
	I0816 05:47:15.589072    3769 host.go:66] Checking if "ha-073000-m05" exists ...
	I0816 05:47:15.589345    3769 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:47:15.589374    3769 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:47:15.597795    3769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52235
	I0816 05:47:15.598129    3769 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:47:15.598460    3769 main.go:141] libmachine: Using API Version  1
	I0816 05:47:15.598473    3769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:47:15.598682    3769 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:47:15.598793    3769 main.go:141] libmachine: (ha-073000-m05) Calling .DriverName
	I0816 05:47:15.598926    3769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:47:15.598937    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetSSHHostname
	I0816 05:47:15.599025    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetSSHPort
	I0816 05:47:15.599112    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetSSHKeyPath
	I0816 05:47:15.599200    3769 main.go:141] libmachine: (ha-073000-m05) Calling .GetSSHUsername
	I0816 05:47:15.599284    3769 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m05/id_rsa Username:docker}
	I0816 05:47:15.635227    3769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:47:15.646265    3769 kubeconfig.go:125] found "ha-073000" server: "https://192.169.0.254:8443"
	I0816 05:47:15.646284    3769 api_server.go:166] Checking apiserver status ...
	I0816 05:47:15.646326    3769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:47:15.658318    3769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1914/cgroup
	W0816 05:47:15.666200    3769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1914/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:47:15.666258    3769 ssh_runner.go:195] Run: ls
	I0816 05:47:15.669519    3769 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 05:47:15.672709    3769 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 05:47:15.672721    3769 status.go:422] ha-073000-m05 apiserver status = Running (err=<nil>)
	I0816 05:47:15.672729    3769 status.go:257] ha-073000-m05 status: &{Name:ha-073000-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:613: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr" : exit status 2
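The status output above shows the worker ha-073000-m04 with `kubelet: Stopped`, which is what drives the non-zero exit from `minikube status`. As a rough sketch (assuming a de-indented, saved copy of a status dump like the one above; the sample lines are taken from it), the stopped component can be surfaced with a small awk pass that remembers the current node heading:

```shell
# From a saved `minikube status` dump, report which nodes have a
# component reported as Stopped. A bare node-name line sets the
# current node; any "...: Stopped" line is attributed to it.
awk '/^[a-z0-9-]+$/ { node = $0 } /: Stopped/ { print node ": " $0 }' <<'EOF'
ha-073000-m04
type: Worker
host: Running
kubelet: Stopped
EOF
# prints: ha-073000-m04: kubelet: Stopped
```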
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-073000 -n ha-073000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 logs -n 25: (3.651366347s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m04 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m03_ha-073000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-073000 cp testdata/cp-test.txt                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000:/home/docker/cp-test_ha-073000-m04_ha-073000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000 sudo cat                                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m02:/home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m02 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m03:/home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | ha-073000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-073000 ssh -n ha-073000-m03 sudo cat                                                                                      | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | /home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-073000 node stop m02 -v=7                                                                                                 | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:38 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-073000 node start m02 -v=7                                                                                                | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:38 PDT | 16 Aug 24 05:39 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-073000 -v=7                                                                                                       | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-073000 -v=7                                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT | 16 Aug 24 05:39 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-073000 --wait=true -v=7                                                                                                | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:39 PDT | 16 Aug 24 05:42 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-073000                                                                                                            | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT |                     |
	| node    | ha-073000 node delete m03 -v=7                                                                                               | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT | 16 Aug 24 05:42 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-073000 stop -v=7                                                                                                          | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:42 PDT | 16 Aug 24 05:43 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-073000 --wait=true                                                                                                     | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:43 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-073000                                                                                                             | ha-073000 | jenkins | v1.33.1 | 16 Aug 24 05:46 PDT | 16 Aug 24 05:47 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:43:04
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:43:04.564740    3700 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:04.564910    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.564915    3700 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:04.564919    3700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.565081    3700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:43:04.566585    3700 out.go:352] Setting JSON to false
	I0816 05:43:04.588805    3700 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1962,"bootTime":1723810222,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:43:04.588897    3700 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:43:04.613000    3700 out.go:177] * [ha-073000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:43:04.653806    3700 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:43:04.653862    3700 notify.go:220] Checking for updates...
	I0816 05:43:04.696885    3700 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:04.717792    3700 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:43:04.738830    3700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:43:04.759882    3700 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 05:43:04.780629    3700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:43:04.802633    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:04.803322    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.803409    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.812971    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
	I0816 05:43:04.813324    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.813803    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.813822    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.814047    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.814164    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.814416    3700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:43:04.814654    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.814677    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.823004    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
	I0816 05:43:04.823356    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.823668    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.823676    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.823881    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.823986    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.852886    3700 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 05:43:04.873686    3700 start.go:297] selected driver: hyperkit
	I0816 05:43:04.873736    3700 start.go:901] validating driver "hyperkit" against &{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.873963    3700 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:43:04.874147    3700 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.874351    3700 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 05:43:04.884210    3700 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 05:43:04.888002    3700 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.888025    3700 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 05:43:04.890692    3700 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:43:04.890731    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:04.890738    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:04.890804    3700 start.go:340] cluster config:
	{Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:04.890902    3700 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:43:04.933833    3700 out.go:177] * Starting "ha-073000" primary control-plane node in "ha-073000" cluster
	I0816 05:43:04.954485    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:04.954567    3700 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 05:43:04.954587    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:04.954798    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:04.954819    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:04.955011    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:04.955870    3700 start.go:360] acquireMachinesLock for ha-073000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:04.955995    3700 start.go:364] duration metric: took 100.576µs to acquireMachinesLock for "ha-073000"
	I0816 05:43:04.956044    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:04.956062    3700 fix.go:54] fixHost starting: 
	I0816 05:43:04.956492    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.956518    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.965467    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
	I0816 05:43:04.965836    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.966195    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.966210    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.966502    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.966647    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:04.966748    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:43:04.966849    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.966924    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3625
	I0816 05:43:04.967937    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:04.967979    3700 fix.go:112] recreateIfNeeded on ha-073000: state=Stopped err=<nil>
	I0816 05:43:04.968006    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	W0816 05:43:04.968088    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:05.010683    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000" ...
	I0816 05:43:05.031624    3700 main.go:141] libmachine: (ha-073000) Calling .Start
	I0816 05:43:05.031872    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.031897    3700 main.go:141] libmachine: (ha-073000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid
	I0816 05:43:05.033643    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:05.033659    3700 main.go:141] libmachine: (ha-073000) DBG | pid 3625 is in state "Stopped"
	I0816 05:43:05.033683    3700 main.go:141] libmachine: (ha-073000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid...
	I0816 05:43:05.034080    3700 main.go:141] libmachine: (ha-073000) DBG | Using UUID 449fd9a3-1c71-4e9a-9271-363ec4bdb253
	I0816 05:43:05.149249    3700 main.go:141] libmachine: (ha-073000) DBG | Generated MAC 36:31:25:a5:a2:ed
	I0816 05:43:05.149291    3700 main.go:141] libmachine: (ha-073000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:05.149397    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149433    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"449fd9a3-1c71-4e9a-9271-363ec4bdb253", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:05.149473    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "449fd9a3-1c71-4e9a-9271-363ec4bdb253", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:05.149540    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 449fd9a3-1c71-4e9a-9271-363ec4bdb253 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/ha-073000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:05.149556    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:05.150961    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 DEBUG: hyperkit: Pid is 3714
	I0816 05:43:05.151298    3700 main.go:141] libmachine: (ha-073000) DBG | Attempt 0
	I0816 05:43:05.151311    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:05.151435    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:43:05.153225    3700 main.go:141] libmachine: (ha-073000) DBG | Searching for 36:31:25:a5:a2:ed in /var/db/dhcpd_leases ...
	I0816 05:43:05.153302    3700 main.go:141] libmachine: (ha-073000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:05.153320    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:05.153335    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:05.153348    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:05.153395    3700 main.go:141] libmachine: (ha-073000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09a1c}
	I0816 05:43:05.153412    3700 main.go:141] libmachine: (ha-073000) DBG | Found match: 36:31:25:a5:a2:ed
	I0816 05:43:05.153421    3700 main.go:141] libmachine: (ha-073000) Calling .GetConfigRaw
	I0816 05:43:05.153453    3700 main.go:141] libmachine: (ha-073000) DBG | IP: 192.169.0.5
	I0816 05:43:05.154140    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:05.154367    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:05.154767    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:05.154779    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:05.154938    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:05.155074    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:05.155194    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155310    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:05.155408    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:05.155550    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:05.155750    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:05.155759    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:05.159119    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:05.211364    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:05.212077    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.212095    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.212103    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.212109    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.591470    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:05.591483    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:05.706454    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:05.706476    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:05.706490    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:05.706501    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:05.707461    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:05.707472    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:11.286594    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:43:11.286691    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:43:11.286700    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:43:11.310519    3700 main.go:141] libmachine: (ha-073000) DBG | 2024/08/16 05:43:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:43:40.225322    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:40.225337    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225482    3700 buildroot.go:166] provisioning hostname "ha-073000"
	I0816 05:43:40.225493    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.225593    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.225692    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.225793    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225892    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.225986    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.226106    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.226271    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.226298    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000 && echo "ha-073000" | sudo tee /etc/hostname
	I0816 05:43:40.294551    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000
	
	I0816 05:43:40.294568    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.294702    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.294805    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.294917    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.295018    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.295131    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.295293    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.295303    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:40.357437    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:43:40.357455    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:40.357467    3700 buildroot.go:174] setting up certificates
	I0816 05:43:40.357475    3700 provision.go:84] configureAuth start
	I0816 05:43:40.357482    3700 main.go:141] libmachine: (ha-073000) Calling .GetMachineName
	I0816 05:43:40.357611    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:40.357710    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.357800    3700 provision.go:143] copyHostCerts
	I0816 05:43:40.357831    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.357900    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:40.357908    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:40.358056    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:40.358263    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358303    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:40.358308    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:40.358383    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:40.358527    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358564    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:40.358575    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:40.358655    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:40.358790    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000 san=[127.0.0.1 192.169.0.5 ha-073000 localhost minikube]
	I0816 05:43:40.668742    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:40.668797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:40.668812    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.669020    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.669115    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.669208    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.669298    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:40.705870    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:40.705942    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:40.727099    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:40.727157    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0816 05:43:40.747334    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:40.747393    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 05:43:40.766795    3700 provision.go:87] duration metric: took 409.312981ms to configureAuth
	I0816 05:43:40.766810    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:40.766972    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:40.766985    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:40.767112    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.767214    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.767307    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767377    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.767456    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.767585    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.767712    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.767720    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:40.823994    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:40.824009    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:43:40.824077    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:40.824089    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.824227    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.824329    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824430    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.824516    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.824679    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.824819    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.824862    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:40.894312    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:40.894335    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:40.894465    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:40.894566    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894651    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:40.894725    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:40.894858    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:40.895012    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:40.895025    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:43:42.619681    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:43:42.619696    3700 machine.go:96] duration metric: took 37.465655472s to provisionDockerMachine
	I0816 05:43:42.619707    3700 start.go:293] postStartSetup for "ha-073000" (driver="hyperkit")
	I0816 05:43:42.619714    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:43:42.619724    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.619902    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:43:42.619926    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.620017    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.620114    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.620221    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.620305    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.656447    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:43:42.659759    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:43:42.659773    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:43:42.659872    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:43:42.660059    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:43:42.660065    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:43:42.660269    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:43:42.667667    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:42.687880    3700 start.go:296] duration metric: took 68.167584ms for postStartSetup
	I0816 05:43:42.687899    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.688070    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:43:42.688083    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.688171    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.688267    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.688367    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.688456    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.722698    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:43:42.722761    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:43:42.776641    3700 fix.go:56] duration metric: took 37.82132494s for fixHost
	I0816 05:43:42.776663    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.776810    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.776931    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777033    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.777125    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.777253    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:42.777390    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0816 05:43:42.777397    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:43:42.836399    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812222.925313054
	
	I0816 05:43:42.836411    3700 fix.go:216] guest clock: 1723812222.925313054
	I0816 05:43:42.836417    3700 fix.go:229] Guest: 2024-08-16 05:43:42.925313054 -0700 PDT Remote: 2024-08-16 05:43:42.776654 -0700 PDT m=+38.247448415 (delta=148.659054ms)
	I0816 05:43:42.836434    3700 fix.go:200] guest clock delta is within tolerance: 148.659054ms
	I0816 05:43:42.836437    3700 start.go:83] releasing machines lock for "ha-073000", held for 37.881174383s
	I0816 05:43:42.836457    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.836598    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:42.836699    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837049    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837160    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:43:42.837249    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:43:42.837284    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837297    3700 ssh_runner.go:195] Run: cat /version.json
	I0816 05:43:42.837308    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:43:42.837399    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837413    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:43:42.837511    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837521    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:43:42.837609    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837623    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:43:42.837690    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.837711    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:43:42.913686    3700 ssh_runner.go:195] Run: systemctl --version
	I0816 05:43:42.918889    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 05:43:42.923312    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:43:42.923351    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:43:42.935697    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:43:42.935707    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:42.935801    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:42.953681    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:43:42.962535    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:43:42.971266    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:43:42.971307    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:43:42.979934    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:42.988664    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:43:42.997290    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:43:43.005918    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:43:43.014721    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:43:43.023404    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:43:43.032084    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:43:43.040766    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:43:43.048727    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:43:43.056628    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.160133    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
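The run of `sed` calls above forces containerd onto the cgroupfs driver by flipping `SystemdCgroup` in `/etc/containerd/config.toml` before restarting the service. A minimal sketch of the key substitution, run against a scratch copy rather than the real config (the TOML fragment is illustrative, and GNU sed is assumed, as inside the Linux guest):

```shell
# Scratch file standing in for /etc/containerd/config.toml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# The same substitution minikube runs: disable systemd cgroups in favor of cgroupfs
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"   # → SystemdCgroup = false
```

Note the macOS host could not run this form directly (BSD sed needs `-i ''`); the log's commands execute over SSH inside the VM.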
	I0816 05:43:43.175551    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:43:43.175624    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:43:43.187204    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.198626    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:43:43.214407    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:43:43.226374    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.237460    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:43:43.257683    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:43:43.271060    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:43:43.289045    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:43:43.291949    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:43:43.299258    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:43:43.312470    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:43:43.422601    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:43:43.528683    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:43:43.528764    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:43:43.542650    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:43.653228    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:43:46.028721    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.375520385s)
	I0816 05:43:46.028781    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:43:46.040150    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.049993    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:43:46.143000    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:43:46.256755    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.354748    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:43:46.369090    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:43:46.380481    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.481851    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:43:46.546753    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:43:46.546835    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:43:46.551170    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:43:46.551219    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:43:46.554224    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:43:46.581136    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:43:46.581204    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.600242    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:43:46.641436    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:43:46.641483    3700 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:43:46.641865    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:43:46.646502    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
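The `{ grep -v …; echo …; } > /tmp/h.$$` pattern above is an idempotent hosts-file update: strip any stale `host.minikube.internal` line, then append the current one, so repeated starts never accumulate duplicates. A sketch against a temp file (the real command copies the result back over `/etc/hosts` with sudo):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n' > "$hosts"

tab=$(printf '\t')
# Drop any existing entry, then append the fresh one
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep -c 'host.minikube.internal' "$hosts"   # → 1
```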
	I0816 05:43:46.656383    3700 kubeadm.go:883] updating cluster {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 05:43:46.656461    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:46.656510    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.670426    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.670438    3700 docker.go:615] Images already preloaded, skipping extraction
	I0816 05:43:46.670515    3700 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 05:43:46.682547    3700 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 05:43:46.682568    3700 cache_images.go:84] Images are preloaded, skipping loading
	I0816 05:43:46.682577    3700 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0816 05:43:46.682650    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:43:46.682717    3700 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 05:43:46.717612    3700 cni.go:84] Creating CNI manager for ""
	I0816 05:43:46.717631    3700 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0816 05:43:46.717641    3700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 05:43:46.717661    3700 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-073000 NodeName:ha-073000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 05:43:46.717752    3700 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-073000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
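The generated kubeadm config above is a single multi-document YAML carrying four API objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration (it lands on the node as `/var/tmp/minikube/kubeadm.yaml.new` a few lines below). A skeleton of just the document headers, plus a quick structural check against a temp stand-in:

```shell
kubeadm_yaml=$(mktemp)
cat > "$kubeadm_yaml" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Four documents, one per component
grep -c '^kind:' "$kubeadm_yaml"   # → 4
```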
	I0816 05:43:46.717766    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:43:46.717818    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:43:46.732805    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:43:46.732879    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
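In the kube-vip static-pod manifest above, the load-balanced control-plane VIP (192.169.0.254, matching the profile's APIServerHAVIP) travels as the `address` env var. A sketch pulling it back out of a manifest fragment, with a temp file standing in for the real `/etc/kubernetes/manifests/kube-vip.yaml`:

```shell
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
    - name: address
      value: 192.169.0.254
    - name: lb_port
      value: "8443"
EOF

# The VIP kube-vip advertises is on the line after "name: address"
awk '/name: address/ { getline; print $2 }' "$manifest"   # → 192.169.0.254
```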
	I0816 05:43:46.732932    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:43:46.744741    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:43:46.744797    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0816 05:43:46.752198    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0816 05:43:46.766525    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:43:46.779788    3700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0816 05:43:46.793230    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:43:46.806345    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:43:46.809072    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:43:46.818297    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:43:46.921223    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:43:46.935952    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.5
	I0816 05:43:46.935964    3700 certs.go:194] generating shared ca certs ...
	I0816 05:43:46.935976    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.936150    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:43:46.936228    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:43:46.936237    3700 certs.go:256] generating profile certs ...
	I0816 05:43:46.936323    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:43:46.936347    3700 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e
	I0816 05:43:46.936361    3700 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0816 05:43:46.977158    3700 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e ...
	I0816 05:43:46.977174    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e: {Name:mk8d6f44d0e237393798a574888fbd7c16b75ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977520    3700 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e ...
	I0816 05:43:46.977530    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e: {Name:mk0b98c1e535c8fd1781c44e6f22509b6b916e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:46.977744    3700 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt
	I0816 05:43:46.977955    3700 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.0140b12e -> /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key
	I0816 05:43:46.978212    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:43:46.978221    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:43:46.978248    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:43:46.978268    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:43:46.978286    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:43:46.978305    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:43:46.978327    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:43:46.978345    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:43:46.978363    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:43:46.978461    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:43:46.978507    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:43:46.978516    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:43:46.978550    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:43:46.978580    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:43:46.978610    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:43:46.978674    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:43:46.978708    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:46.978729    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:43:46.978748    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:43:46.979212    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:43:47.006926    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:43:47.033030    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:43:47.064204    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:43:47.096328    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:43:47.140607    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:43:47.183767    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:43:47.225875    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:43:47.272651    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:43:47.321871    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:43:47.361863    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:43:47.392530    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 05:43:47.413203    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:43:47.419281    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:43:47.429288    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437638    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.437698    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:43:47.445809    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:43:47.456922    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:43:47.468355    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473399    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.473439    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:43:47.477636    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:43:47.487065    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:43:47.496174    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499485    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.499517    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:43:47.503664    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
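The `openssl x509 -hash -noout` / `ln -fs` pairs above build the subject-hash symlinks (e.g. `b5213941.0` for minikubeCA.pem) that OpenSSL's CApath lookup expects under `/etc/ssl/certs`. A sketch with a throwaway self-signed cert; the subject and paths are illustrative, so the hash will differ from the log's:

```shell
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=minikubeCA-demo" 2>/dev/null

# 8-hex-digit subject hash, used as the symlink basename: <hash>.0
h=$(openssl x509 -hash -noout -in "$crt")
ln -fs "$crt" "$(dirname "$crt")/$h.0"
echo "$h"
```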
	I0816 05:43:47.512642    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:43:47.516083    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:43:47.520349    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:43:47.524473    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:43:47.528697    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:43:47.532807    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:43:47.536987    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
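Each `-checkend 86400` run above asks whether the certificate stays valid for at least another 86400 seconds (24 h); exit status 0 means yes, which is how minikube decides it can reuse the cert instead of regenerating it. A sketch against a throwaway cert, with a 2-day validity chosen so the check passes:

```shell
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 2 -subj "/CN=checkend-demo" 2>/dev/null

# Exit 0: still valid 24h from now; exit 1 would force regeneration
if openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null; then
  echo "valid for at least 24h"
fi
```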
	I0816 05:43:47.541120    3700 kubeadm.go:392] StartCluster: {Name:ha-073000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:43:47.541240    3700 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 05:43:47.554428    3700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 05:43:47.562930    3700 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 05:43:47.562950    3700 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 05:43:47.563002    3700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 05:43:47.571138    3700 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:43:47.571458    3700 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-073000" does not appear in /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.571540    3700 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1009/kubeconfig needs updating (will repair): [kubeconfig missing "ha-073000" cluster setting kubeconfig missing "ha-073000" context setting]
	I0816 05:43:47.571730    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.572561    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.572756    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 05:43:47.573052    3700 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 05:43:47.573236    3700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 05:43:47.581202    3700 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0816 05:43:47.581215    3700 kubeadm.go:597] duration metric: took 18.259849ms to restartPrimaryControlPlane
	I0816 05:43:47.581220    3700 kubeadm.go:394] duration metric: took 40.104743ms to StartCluster
	I0816 05:43:47.581228    3700 settings.go:142] acquiring lock: {Name:mkb3c8aac25c21025142737c3a236d96f65e9fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581298    3700 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:43:47.581626    3700 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:43:47.581845    3700 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:43:47.581857    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:43:47.581865    3700 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 05:43:47.581987    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.603872    3700 out.go:177] * Enabled addons: 
	I0816 05:43:47.645376    3700 addons.go:510] duration metric: took 63.484341ms for enable addons: enabled=[]
	I0816 05:43:47.645417    3700 start.go:246] waiting for cluster config update ...
	I0816 05:43:47.645429    3700 start.go:255] writing updated cluster config ...
	I0816 05:43:47.667512    3700 out.go:201] 
	I0816 05:43:47.689977    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:47.690106    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.712362    3700 out.go:177] * Starting "ha-073000-m02" control-plane node in "ha-073000" cluster
	I0816 05:43:47.754492    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:43:47.754528    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:43:47.754704    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:43:47.754723    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:43:47.754841    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.755736    3700 start.go:360] acquireMachinesLock for ha-073000-m02: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:43:47.755840    3700 start.go:364] duration metric: took 80.235µs to acquireMachinesLock for "ha-073000-m02"
	I0816 05:43:47.755868    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:43:47.755877    3700 fix.go:54] fixHost starting: m02
	I0816 05:43:47.756330    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:47.756357    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:47.765501    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0816 05:43:47.765944    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:47.766357    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:43:47.766399    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:47.766686    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:47.766840    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.766960    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:43:47.767043    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.767163    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3630
	I0816 05:43:47.768076    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.768103    3700 fix.go:112] recreateIfNeeded on ha-073000-m02: state=Stopped err=<nil>
	I0816 05:43:47.768113    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	W0816 05:43:47.768243    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:43:47.789281    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m02" ...
	I0816 05:43:47.810495    3700 main.go:141] libmachine: (ha-073000-m02) Calling .Start
	I0816 05:43:47.810746    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.810809    3700 main.go:141] libmachine: (ha-073000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid
	I0816 05:43:47.812579    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:47.812591    3700 main.go:141] libmachine: (ha-073000-m02) DBG | pid 3630 is in state "Stopped"
	I0816 05:43:47.812606    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid...
	I0816 05:43:47.812915    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Using UUID 2ecbd3fa-135d-470f-9281-b78e2fd82941
	I0816 05:43:47.840853    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Generated MAC 3a:16:de:25:18:f9
	I0816 05:43:47.840881    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:43:47.841024    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841093    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2ecbd3fa-135d-470f-9281-b78e2fd82941", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:43:47.841131    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2ecbd3fa-135d-470f-9281-b78e2fd82941", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:43:47.841173    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2ecbd3fa-135d-470f-9281-b78e2fd82941 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/ha-073000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:43:47.841196    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:43:47.842666    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 DEBUG: hyperkit: Pid is 3719
	I0816 05:43:47.842981    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Attempt 0
	I0816 05:43:47.843001    3700 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:47.843149    3700 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3719
	I0816 05:43:47.845190    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Searching for 3a:16:de:25:18:f9 in /var/db/dhcpd_leases ...
	I0816 05:43:47.845245    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:43:47.845265    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:43:47.845294    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:43:47.845311    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:43:47.845326    3700 main.go:141] libmachine: (ha-073000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09a2e}
	I0816 05:43:47.845337    3700 main.go:141] libmachine: (ha-073000-m02) DBG | Found match: 3a:16:de:25:18:f9
	I0816 05:43:47.845356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetConfigRaw
	I0816 05:43:47.845364    3700 main.go:141] libmachine: (ha-073000-m02) DBG | IP: 192.169.0.6
	I0816 05:43:47.846051    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:47.846244    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:43:47.846807    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:43:47.846817    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:47.846948    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:47.847069    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:47.847170    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:47.847430    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:47.847576    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:47.847744    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:47.847752    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:43:47.850417    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:43:47.859543    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:43:47.860408    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:47.860422    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:47.860431    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:47.860467    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:47 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.243712    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:43:48.243733    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:43:48.358576    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:43:48.358605    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:43:48.358619    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:43:48.358631    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:43:48.359399    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:43:48.359409    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:48 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:43:53.958154    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 05:43:53.958240    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 05:43:53.958254    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 05:43:53.983312    3700 main.go:141] libmachine: (ha-073000-m02) DBG | 2024/08/16 05:43:53 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 05:43:58.907700    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:43:58.907714    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907845    3700 buildroot.go:166] provisioning hostname "ha-073000-m02"
	I0816 05:43:58.907881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:58.907973    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.908072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.908173    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.908356    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.908483    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.908630    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.908640    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m02 && echo "ha-073000-m02" | sudo tee /etc/hostname
	I0816 05:43:58.968566    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m02
	
	I0816 05:43:58.968580    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:58.968718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:58.968818    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.968913    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:58.969011    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:58.969127    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:58.969267    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:58.969280    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:43:59.024122    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 05:43:59.024136    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:43:59.024144    3700 buildroot.go:174] setting up certificates
	I0816 05:43:59.024150    3700 provision.go:84] configureAuth start
	I0816 05:43:59.024156    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetMachineName
	I0816 05:43:59.024280    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:43:59.024383    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.024473    3700 provision.go:143] copyHostCerts
	I0816 05:43:59.024501    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024550    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:43:59.024556    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:43:59.024690    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:43:59.024885    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.024915    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:43:59.024920    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:43:59.025027    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:43:59.025190    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025223    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:43:59.025228    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:43:59.025295    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:43:59.025446    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m02 san=[127.0.0.1 192.169.0.6 ha-073000-m02 localhost minikube]
	I0816 05:43:59.071749    3700 provision.go:177] copyRemoteCerts
	I0816 05:43:59.071798    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:43:59.071819    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.071951    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.072035    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.072105    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.072191    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:43:59.104582    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:43:59.104649    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:43:59.123906    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:43:59.123983    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:43:59.142982    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:43:59.143045    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:43:59.162066    3700 provision.go:87] duration metric: took 137.911741ms to configureAuth
	I0816 05:43:59.162078    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:43:59.162258    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:59.162271    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:43:59.162402    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.162489    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.162572    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162650    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.162733    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.162851    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.162983    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.162993    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:43:59.210853    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:43:59.210865    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:43:59.210945    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:43:59.210957    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.211118    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.211219    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211310    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.211387    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.211514    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.211649    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.211694    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:43:59.271471    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:43:59.271487    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:43:59.271647    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:43:59.271739    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271846    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:43:59.271935    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:43:59.272053    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:43:59.272197    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:43:59.272208    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:00.927291    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:00.927305    3700 machine.go:96] duration metric: took 13.080748192s to provisionDockerMachine
	I0816 05:44:00.927312    3700 start.go:293] postStartSetup for "ha-073000-m02" (driver="hyperkit")
	I0816 05:44:00.927320    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:00.927330    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:00.927511    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:00.927525    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:00.927652    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:00.927731    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:00.927829    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:00.927905    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:00.960594    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:00.964512    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:00.964524    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:00.964627    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:00.964771    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:00.964778    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:00.964934    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:00.975551    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:01.005513    3700 start.go:296] duration metric: took 78.192885ms for postStartSetup
	I0816 05:44:01.005559    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.005745    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:01.005758    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.005896    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.005983    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.006072    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.006164    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.040756    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:01.040818    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:01.075264    3700 fix.go:56] duration metric: took 13.319647044s for fixHost
	I0816 05:44:01.075289    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.075435    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.075528    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075613    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.075718    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.075847    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:01.075998    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0816 05:44:01.076006    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:01.125972    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812240.969906147
	
	I0816 05:44:01.125983    3700 fix.go:216] guest clock: 1723812240.969906147
	I0816 05:44:01.125988    3700 fix.go:229] Guest: 2024-08-16 05:44:00.969906147 -0700 PDT Remote: 2024-08-16 05:44:01.075279 -0700 PDT m=+56.546434198 (delta=-105.372853ms)
	I0816 05:44:01.125998    3700 fix.go:200] guest clock delta is within tolerance: -105.372853ms
	I0816 05:44:01.126002    3700 start.go:83] releasing machines lock for "ha-073000-m02", held for 13.370412469s
	I0816 05:44:01.126019    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.126142    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:01.147724    3700 out.go:177] * Found network options:
	I0816 05:44:01.167556    3700 out.go:177]   - NO_PROXY=192.169.0.5
	W0816 05:44:01.188682    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.188720    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189649    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189881    3700 main.go:141] libmachine: (ha-073000-m02) Calling .DriverName
	I0816 05:44:01.189985    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:01.190020    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	W0816 05:44:01.190130    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:01.190184    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190263    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:01.190286    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHHostname
	I0816 05:44:01.190352    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190515    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.190522    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHPort
	I0816 05:44:01.190713    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	I0816 05:44:01.190722    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHKeyPath
	I0816 05:44:01.190891    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetSSHUsername
	I0816 05:44:01.191002    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m02/id_rsa Username:docker}
	W0816 05:44:01.219673    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:01.219729    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:01.266010    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:01.266030    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.266137    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.282065    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:01.291072    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:01.299924    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.299972    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:01.308888    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.317715    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:01.326478    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:01.335362    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:01.344565    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:01.353443    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:01.362391    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:01.371153    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:01.379211    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:01.387397    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.485288    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 05:44:01.504163    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:01.504230    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:01.519289    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.533468    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:01.549919    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:01.560311    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.570439    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:01.589516    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:01.599936    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:01.614849    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:01.617987    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:01.625242    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:01.638690    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:01.731621    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:01.840350    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:01.840371    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:01.854317    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:01.960384    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:44:04.269941    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.309582879s)
	I0816 05:44:04.270007    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 05:44:04.280320    3700 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 05:44:04.292872    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.303371    3700 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 05:44:04.393390    3700 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 05:44:04.502895    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.604917    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 05:44:04.618462    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 05:44:04.629172    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:04.732241    3700 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 05:44:04.796052    3700 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 05:44:04.796135    3700 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 05:44:04.800527    3700 start.go:563] Will wait 60s for crictl version
	I0816 05:44:04.800578    3700 ssh_runner.go:195] Run: which crictl
	I0816 05:44:04.803568    3700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 05:44:04.832000    3700 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 05:44:04.832069    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.850869    3700 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 05:44:04.890177    3700 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 05:44:04.933118    3700 out.go:177]   - env NO_PROXY=192.169.0.5
	I0816 05:44:04.954934    3700 main.go:141] libmachine: (ha-073000-m02) Calling .GetIP
	I0816 05:44:04.955381    3700 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 05:44:04.959881    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:44:04.969321    3700 mustload.go:65] Loading cluster: ha-073000
	I0816 05:44:04.969488    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:04.969741    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.969756    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.978313    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52098
	I0816 05:44:04.978649    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.979005    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.979022    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.979231    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.979362    3700 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:44:04.979460    3700 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:04.979527    3700 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3714
	I0816 05:44:04.980457    3700 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:44:04.980703    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:04.980719    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:04.989380    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52100
	I0816 05:44:04.989872    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:04.990229    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:04.990239    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:04.990441    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:04.990567    3700 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:44:04.990667    3700 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000 for IP: 192.169.0.6
	I0816 05:44:04.990673    3700 certs.go:194] generating shared ca certs ...
	I0816 05:44:04.990681    3700 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 05:44:04.990819    3700 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 05:44:04.990876    3700 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 05:44:04.990885    3700 certs.go:256] generating profile certs ...
	I0816 05:44:04.990968    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key
	I0816 05:44:04.991052    3700 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key.852e3a00
	I0816 05:44:04.991104    3700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key
	I0816 05:44:04.991115    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 05:44:04.991137    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 05:44:04.991158    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 05:44:04.991181    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 05:44:04.991203    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 05:44:04.991224    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 05:44:04.991243    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 05:44:04.991260    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 05:44:04.991336    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 05:44:04.991373    3700 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 05:44:04.991382    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 05:44:04.991415    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 05:44:04.991446    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 05:44:04.991475    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 05:44:04.991545    3700 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:04.991577    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 05:44:04.991598    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:04.991616    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 05:44:04.991641    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:44:04.991732    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:44:04.991816    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:44:04.991895    3700 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:44:04.991976    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:44:05.018674    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0816 05:44:05.021887    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0816 05:44:05.030501    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0816 05:44:05.033440    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0816 05:44:05.041955    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0816 05:44:05.044846    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0816 05:44:05.053721    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0816 05:44:05.056775    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0816 05:44:05.065337    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0816 05:44:05.068254    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0816 05:44:05.076761    3700 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0816 05:44:05.079704    3700 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0816 05:44:05.088144    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 05:44:05.108529    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 05:44:05.128319    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 05:44:05.148205    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 05:44:05.168044    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 05:44:05.187959    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 05:44:05.207850    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 05:44:05.227864    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 05:44:05.247806    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 05:44:05.267586    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 05:44:05.287321    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 05:44:05.307517    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0816 05:44:05.321001    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0816 05:44:05.334635    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0816 05:44:05.348115    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0816 05:44:05.361521    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0816 05:44:05.375128    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0816 05:44:05.388391    3700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0816 05:44:05.402014    3700 ssh_runner.go:195] Run: openssl version
	I0816 05:44:05.406108    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 05:44:05.414347    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417650    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.417685    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 05:44:05.421754    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 05:44:05.429962    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 05:44:05.438138    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441411    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.441444    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 05:44:05.445615    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 05:44:05.453740    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 05:44:05.462021    3700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465413    3700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.465453    3700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 05:44:05.469602    3700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 05:44:05.477722    3700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 05:44:05.481045    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 05:44:05.485278    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 05:44:05.489478    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 05:44:05.493769    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 05:44:05.497993    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 05:44:05.502305    3700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 05:44:05.506534    3700 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0816 05:44:05.506585    3700 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-073000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-073000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 05:44:05.506599    3700 kube-vip.go:115] generating kube-vip config ...
	I0816 05:44:05.506631    3700 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0816 05:44:05.518840    3700 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0816 05:44:05.518872    3700 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0816 05:44:05.518938    3700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 05:44:05.527488    3700 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 05:44:05.527548    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0816 05:44:05.535755    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0816 05:44:05.549218    3700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 05:44:05.562474    3700 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0816 05:44:05.575901    3700 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0816 05:44:05.578825    3700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 05:44:05.588727    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.694671    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.710202    3700 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 05:44:05.710412    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:05.731897    3700 out.go:177] * Verifying Kubernetes components...
	I0816 05:44:05.773259    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:05.888127    3700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 05:44:05.905019    3700 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:44:05.905207    3700 kapi.go:59] client config for ha-073000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xb3b9f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0816 05:44:05.905240    3700 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0816 05:44:05.905409    3700 node_ready.go:35] waiting up to 6m0s for node "ha-073000-m02" to be "Ready" ...
	I0816 05:44:05.905490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:05.905495    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:05.905503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:05.905507    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.204653    3700 round_trippers.go:574] Response Status: 200 OK in 8299 milliseconds
	I0816 05:44:14.205215    3700 node_ready.go:49] node "ha-073000-m02" has status "Ready":"True"
	I0816 05:44:14.205228    3700 node_ready.go:38] duration metric: took 8.299966036s for node "ha-073000-m02" to be "Ready" ...
	I0816 05:44:14.205235    3700 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:14.205277    3700 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 05:44:14.205286    3700 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 05:44:14.205323    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:14.205327    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.205333    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.205336    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.223247    3700 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0816 05:44:14.231122    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.231187    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-2fdpw
	I0816 05:44:14.231192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.231198    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.231208    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.240205    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:14.240681    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.240689    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.240695    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.240699    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.247571    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:14.247971    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.247981    3700 pod_ready.go:82] duration metric: took 16.842454ms for pod "coredns-6f6b679f8f-2fdpw" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.247988    3700 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.248023    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-vf22s
	I0816 05:44:14.248028    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.248034    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.248038    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252093    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.252500    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.252508    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.252513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.252516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.255102    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.255471    3700 pod_ready.go:93] pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.255482    3700 pod_ready.go:82] duration metric: took 7.488195ms for pod "coredns-6f6b679f8f-vf22s" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255489    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.255538    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000
	I0816 05:44:14.255543    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.255549    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.255554    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257423    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.257786    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.257793    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.257798    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.257802    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:14.261582    3700 pod_ready.go:93] pod "etcd-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.261592    3700 pod_ready.go:82] duration metric: took 6.098581ms for pod "etcd-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261599    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.261644    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m02
	I0816 05:44:14.261649    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.261654    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.261658    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.264072    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.264627    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:14.264635    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.264640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.264645    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.267306    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.267636    3700 pod_ready.go:93] pod "etcd-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.267645    3700 pod_ready.go:82] duration metric: took 6.041319ms for pod "etcd-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267652    3700 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.267706    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-073000-m03
	I0816 05:44:14.267711    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.267716    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.267726    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.269558    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:14.406286    3700 request.go:632] Waited for 136.053726ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406320    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:14.406325    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.406330    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.406334    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.412790    3700 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0816 05:44:14.412989    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413000    3700 pod_ready.go:82] duration metric: took 145.343663ms for pod "etcd-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:14.413019    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "etcd-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:14.413037    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.606275    3700 request.go:632] Waited for 193.204942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606325    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000
	I0816 05:44:14.606330    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.606342    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.606346    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.611263    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:14.806263    3700 request.go:632] Waited for 194.483786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806300    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:14.806306    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:14.806312    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:14.806316    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:14.808457    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:14.809016    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:14.809026    3700 pod_ready.go:82] duration metric: took 395.988936ms for pod "kube-apiserver-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:14.809033    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:15.005594    3700 request.go:632] Waited for 196.505275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005624    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.005630    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.005637    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.005640    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.010212    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.206584    3700 request.go:632] Waited for 195.946236ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206645    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.206685    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.206691    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.206695    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.211350    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:15.405410    3700 request.go:632] Waited for 95.393387ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405469    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.405474    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.405479    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.405483    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.408080    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.605592    3700 request.go:632] Waited for 196.04685ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605628    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:15.605634    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.605640    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.605644    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.607860    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:15.810998    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:15.811014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:15.811021    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:15.811029    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:15.813293    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:16.005626    3700 request.go:632] Waited for 191.969847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005743    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.005754    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.005765    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.005773    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.008807    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.309801    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.309825    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.309836    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.309844    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.313121    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.407323    3700 request.go:632] Waited for 93.416086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407387    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.407397    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.407409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.407424    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.410882    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.810461    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:16.810486    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.810498    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.810504    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.813546    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:16.814282    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:16.814289    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:16.814295    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:16.814298    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:16.816149    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:16.816456    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:17.309900    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.309921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.309932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.309937    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.312735    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:17.313209    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.313218    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.313223    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.313233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.314796    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:17.809685    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:17.809718    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.809758    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.809767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.813579    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:17.814147    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:17.814157    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:17.814165    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:17.814169    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:17.815986    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.309824    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.309839    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.309845    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.309850    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.312500    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:18.312950    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.312958    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.312964    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.312968    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.317556    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.811340    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:18.811362    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.811380    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.811389    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.815578    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:18.816331    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:18.816338    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:18.816343    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:18.816347    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:18.818287    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:18.818637    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:19.309154    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.309213    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.309226    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.309244    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.313107    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.313580    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.313589    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.313597    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.313601    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.315208    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:19.810298    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:19.810320    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.810332    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.810338    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.813934    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:19.814561    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:19.814571    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:19.814579    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:19.814589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:19.816289    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.309290    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.309312    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.309322    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.309328    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.313244    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.313715    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.313724    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.313731    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.313737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.315554    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:20.809680    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:20.809710    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.809723    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.809735    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.813009    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:20.813665    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:20.813674    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:20.813682    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:20.813686    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:20.815508    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.309619    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.309640    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.309667    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.309675    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.313585    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.314167    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.314174    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.314179    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.314182    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.315676    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:21.316053    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:21.809228    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:21.809250    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.809261    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.809267    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.812952    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:21.813489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:21.813500    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:21.813508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:21.813512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:21.815094    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.310261    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.310287    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.310299    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.310305    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.314627    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:22.314992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.314999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.315005    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.315008    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.316747    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:22.810493    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:22.810515    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.810526    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.810532    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814082    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:22.814652    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:22.814660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:22.814666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:22.814670    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:22.816180    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.310190    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.310217    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.310228    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.310235    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.314496    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:23.314922    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.314929    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.314935    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.314939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.316481    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.316841    3700 pod_ready.go:103] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:23.809175    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:23.809187    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.809202    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.809207    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.811160    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:23.811560    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:23.811568    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:23.811574    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:23.811578    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:23.814714    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.309762    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m02
	I0816 05:44:24.309784    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.309796    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.309802    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.313492    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.314086    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.314097    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.314106    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.314111    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.315684    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.316026    3700 pod_ready.go:93] pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:24.316036    3700 pod_ready.go:82] duration metric: took 9.507184684s for pod "kube-apiserver-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316045    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.316078    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-073000-m03
	I0816 05:44:24.316086    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.316091    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.316095    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.317489    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.317864    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:24.317872    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.317877    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.317881    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.319230    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:24.319275    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319288    3700 pod_ready.go:82] duration metric: took 3.236554ms for pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.319295    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-apiserver-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:24.319299    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.319330    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000
	I0816 05:44:24.319335    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.319340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.319344    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.320953    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.321429    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:24.321437    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.321442    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.321446    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.322965    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.323320    3700 pod_ready.go:98] node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323329    3700 pod_ready.go:82] duration metric: took 4.023708ms for pod "kube-controller-manager-ha-073000" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:24.323334    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000" hosting pod "kube-controller-manager-ha-073000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-073000" has status "Ready":"False"
	I0816 05:44:24.323339    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:24.323367    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.323371    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.323379    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.323384    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.324781    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.325216    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.325223    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.325229    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.325233    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.326748    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:24.824459    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:24.824484    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.824494    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.824506    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828252    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:24.828701    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:24.828712    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:24.828719    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:24.828723    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:24.830277    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.323827    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.323852    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.323864    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.323877    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327155    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:25.327737    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.327744    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.327750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.327754    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.329624    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:25.824109    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:25.824127    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.824136    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.824142    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.826476    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:25.827100    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:25.827108    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:25.827113    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:25.827117    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:25.828738    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.323567    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.323611    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.323617    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.323621    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.325453    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.325886    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.325894    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.325900    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.325904    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.327286    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:26.327610    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:26.823816    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:26.823841    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.823852    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.823860    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.827261    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:26.827821    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:26.827831    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:26.827839    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:26.827844    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:26.829686    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.323970    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.323996    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.324008    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.324015    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.327573    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.328047    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.328056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.328063    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.328067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.329875    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:27.823992    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:27.824023    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.824082    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.824091    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.827309    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:27.827980    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:27.827987    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:27.827993    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:27.827998    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:27.829445    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:28.324903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.324920    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.324929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.324933    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.327085    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.327489    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.327497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.327503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.327506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.329732    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:28.330047    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:28.823366    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:28.823382    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.823401    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.823422    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.846246    3700 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0816 05:44:28.846781    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:28.846789    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:28.846795    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:28.846803    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:28.855350    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:29.324024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.324057    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.324064    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.324067    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.326984    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.327546    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.327553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.327559    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.327563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.330445    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:29.824279    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:29.824299    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.824306    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.824310    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.827888    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:29.828505    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:29.828512    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:29.828518    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:29.828522    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:29.830193    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.323608    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.323627    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.323635    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.323639    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327262    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:30.327789    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.327798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.327803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.327807    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.329683    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:30.330034    3700 pod_ready.go:103] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:30.823965    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:30.823999    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.824072    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.824083    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.828534    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:30.829026    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:30.829034    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:30.829040    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:30.829044    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:30.830921    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.324089    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m02
	I0816 05:44:31.324113    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.324130    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.324137    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.328896    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:31.329571    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:31.329579    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.329585    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.329589    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.331878    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:31.332446    3700 pod_ready.go:93] pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:31.332455    3700 pod_ready.go:82] duration metric: took 7.009249215s for pod "kube-controller-manager-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332462    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.332502    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-073000-m03
	I0816 05:44:31.332507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.332512    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.332516    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.334084    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.334465    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:31.334472    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.334477    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.334480    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.335893    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:31.335965    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335979    3700 pod_ready.go:82] duration metric: took 3.51153ms for pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:31.335986    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-controller-manager-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:31.335991    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:31.336024    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.336029    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.336035    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.336038    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.337516    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.338235    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.338242    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.338248    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.338254    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.339975    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:31.837844    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:31.837869    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.837881    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.837927    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841316    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:31.841903    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:31.841910    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:31.841916    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:31.841919    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:31.843493    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.336771    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.336798    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.336809    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.336816    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.340935    3700 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0816 05:44:32.341412    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.341420    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.341426    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.341429    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.342957    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:32.838157    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:32.838212    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.838225    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.838232    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.841711    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:32.842249    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:32.842259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:32.842267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:32.842272    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:32.843815    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.337329    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.337354    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.337366    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.337372    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.341232    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.341870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.341877    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.341883    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.341887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.343419    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:33.343689    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:33.836128    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:33.836154    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.836164    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.836170    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.840006    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:33.840641    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:33.840650    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:33.840658    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:33.840663    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:33.842504    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.337618    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.337683    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.337693    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.337698    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.339996    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.340499    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.340507    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.340513    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.340517    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.342040    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:34.836185    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:34.836258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.836268    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.836274    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.838913    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:34.839391    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:34.839398    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:34.839404    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:34.839409    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:34.840888    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:35.336164    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.336192    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.336242    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.336256    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.344590    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:35.345076    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.345083    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.345089    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.345106    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.351725    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:35.352079    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:35.838186    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:35.838207    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.838219    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.838225    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.841779    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:35.842361    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:35.842368    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:35.842373    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:35.842376    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:35.844076    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.336349    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.336372    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.336387    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.336393    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.339759    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.340248    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.340258    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.340267    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.340273    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.341840    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:36.836286    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:36.836309    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.836320    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.836326    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.839632    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:36.840490    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:36.840497    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:36.840503    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:36.840506    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:36.842131    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.337695    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.337717    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.337729    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.337736    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.341389    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.341954    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.341964    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.341972    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.341977    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.343432    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.837030    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:37.837056    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.837073    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.837092    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.840202    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:37.840916    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:37.840924    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:37.840929    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:37.840934    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:37.842593    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:37.843036    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:38.336396    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.336421    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.336432    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.336441    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.340051    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.340807    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.340818    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.340826    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.340831    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.342328    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:38.836968    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:38.836993    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.837004    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.837009    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.840369    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:38.840942    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:38.840953    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:38.840961    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:38.840966    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:38.842959    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.337347    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.337374    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.337385    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.337391    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.340872    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.341545    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.341553    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.341560    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.341563    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.343528    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:39.836514    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:39.836585    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.836604    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.836610    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.839854    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:39.840266    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:39.840275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:39.840282    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:39.840287    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:39.841976    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.337117    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.337140    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.337151    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.337157    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.340623    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:40.341081    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.341089    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.341095    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.341099    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.342480    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:40.342868    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:40.836255    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:40.836275    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.836287    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.836294    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.839119    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:40.839650    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:40.839660    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:40.839666    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:40.839671    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:40.841284    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.336308    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.336328    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.336340    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.336356    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.339424    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.339982    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.339990    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.339995    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.339999    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.341644    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:41.837468    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:41.837489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.837501    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.837508    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.841276    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:41.842038    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:41.842045    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:41.842051    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:41.842055    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:41.843559    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.336716    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.336731    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.336737    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.336740    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.342330    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:42.343356    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.343364    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.343370    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.343373    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.351635    3700 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0816 05:44:42.352611    3700 pod_ready.go:103] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"False"
	I0816 05:44:42.836673    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nsmz
	I0816 05:44:42.836700    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.836711    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.836719    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.840138    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:42.840742    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.840753    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.840762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.840767    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.842386    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.842727    3700 pod_ready.go:93] pod "kube-proxy-6nsmz" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.842736    3700 pod_ready.go:82] duration metric: took 11.506966083s for pod "kube-proxy-6nsmz" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842743    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.842773    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c27jt
	I0816 05:44:42.842778    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.842783    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.842788    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.844352    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.844828    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:42.844835    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.844841    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.844845    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.846280    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.846557    3700 pod_ready.go:93] pod "kube-proxy-c27jt" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.846565    3700 pod_ready.go:82] duration metric: took 3.817397ms for pod "kube-proxy-c27jt" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846572    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.846601    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tr2c8
	I0816 05:44:42.846605    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.846612    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.846615    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.848062    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.848495    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:42.848503    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.848509    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.848512    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.849798    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:42.849858    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849868    3700 pod_ready.go:82] duration metric: took 3.291408ms for pod "kube-proxy-tr2c8" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:42.849874    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-proxy-tr2c8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:42.849879    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.849912    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wcgdv
	I0816 05:44:42.849917    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.849922    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.849925    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.851357    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.851732    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m04
	I0816 05:44:42.851740    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.851745    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.851750    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.853123    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.853436    3700 pod_ready.go:93] pod "kube-proxy-wcgdv" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.853444    3700 pod_ready.go:82] duration metric: took 3.55866ms for pod "kube-proxy-wcgdv" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853450    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.853478    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000
	I0816 05:44:42.853482    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.853488    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.853492    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.854845    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.855143    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000
	I0816 05:44:42.855150    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:42.855155    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:42.855160    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:42.856490    3700 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 05:44:42.856772    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:42.856781    3700 pod_ready.go:82] duration metric: took 3.32627ms for pod "kube-scheduler-ha-073000" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:42.856793    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.037823    3700 request.go:632] Waited for 180.948071ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037884    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m02
	I0816 05:44:43.037896    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.037908    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.037918    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.041274    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.238753    3700 request.go:632] Waited for 196.999605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238909    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m02
	I0816 05:44:43.238921    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.238932    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.238939    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.242465    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:43.242850    3700 pod_ready.go:93] pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace has status "Ready":"True"
	I0816 05:44:43.242864    3700 pod_ready.go:82] duration metric: took 386.071689ms for pod "kube-scheduler-ha-073000-m02" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.242873    3700 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	I0816 05:44:43.436910    3700 request.go:632] Waited for 193.992761ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437002    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-073000-m03
	I0816 05:44:43.437014    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.437025    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.437033    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.439940    3700 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 05:44:43.637222    3700 request.go:632] Waited for 196.770029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637254    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-073000-m03
	I0816 05:44:43.637259    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.637265    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.637270    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.638883    3700 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0816 05:44:43.638942    3700 pod_ready.go:98] node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638952    3700 pod_ready.go:82] duration metric: took 396.081081ms for pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace to be "Ready" ...
	E0816 05:44:43.638959    3700 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-073000-m03" hosting pod "kube-scheduler-ha-073000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-073000-m03": nodes "ha-073000-m03" not found
	I0816 05:44:43.638964    3700 pod_ready.go:39] duration metric: took 29.434296561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 05:44:43.638986    3700 api_server.go:52] waiting for apiserver process to appear ...
	I0816 05:44:43.639045    3700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:44:43.650685    3700 api_server.go:72] duration metric: took 37.941199778s to wait for apiserver process to appear ...
	I0816 05:44:43.650696    3700 api_server.go:88] waiting for apiserver healthz status ...
	I0816 05:44:43.650717    3700 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0816 05:44:43.653719    3700 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0816 05:44:43.653750    3700 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0816 05:44:43.653755    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.653762    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.653766    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.654323    3700 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 05:44:43.654424    3700 api_server.go:141] control plane version: v1.31.0
	I0816 05:44:43.654434    3700 api_server.go:131] duration metric: took 3.733932ms to wait for apiserver health ...
	I0816 05:44:43.654442    3700 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 05:44:43.838724    3700 request.go:632] Waited for 184.226134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838846    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:43.838861    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:43.838873    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:43.838887    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:43.845134    3700 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0816 05:44:43.850534    3700 system_pods.go:59] 26 kube-system pods found
	I0816 05:44:43.850556    3700 system_pods.go:61] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850563    3700 system_pods.go:61] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:43.850568    3700 system_pods.go:61] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:43.850572    3700 system_pods.go:61] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:43.850575    3700 system_pods.go:61] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:43.850577    3700 system_pods.go:61] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:43.850582    3700 system_pods.go:61] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:43.850586    3700 system_pods.go:61] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:43.850589    3700 system_pods.go:61] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:43.850592    3700 system_pods.go:61] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:43.850594    3700 system_pods.go:61] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:43.850597    3700 system_pods.go:61] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:43.850600    3700 system_pods.go:61] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:43.850603    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:43.850605    3700 system_pods.go:61] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:43.850608    3700 system_pods.go:61] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:43.850611    3700 system_pods.go:61] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:43.850613    3700 system_pods.go:61] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:43.850616    3700 system_pods.go:61] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:43.850618    3700 system_pods.go:61] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:43.850623    3700 system_pods.go:61] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:43.850627    3700 system_pods.go:61] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:43.850629    3700 system_pods.go:61] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:43.850632    3700 system_pods.go:61] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:43.850635    3700 system_pods.go:61] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:43.850637    3700 system_pods.go:61] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:43.850641    3700 system_pods.go:74] duration metric: took 196.198757ms to wait for pod list to return data ...
	I0816 05:44:43.850647    3700 default_sa.go:34] waiting for default service account to be created ...
	I0816 05:44:44.037355    3700 request.go:632] Waited for 186.643021ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037480    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0816 05:44:44.037489    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.037499    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.037514    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.040844    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.041066    3700 default_sa.go:45] found service account: "default"
	I0816 05:44:44.041079    3700 default_sa.go:55] duration metric: took 190.431399ms for default service account to be created ...
	I0816 05:44:44.041086    3700 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 05:44:44.237668    3700 request.go:632] Waited for 196.520766ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237780    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0816 05:44:44.237791    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.237803    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.237812    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.243185    3700 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 05:44:44.248662    3700 system_pods.go:86] 26 kube-system pods found
	I0816 05:44:44.248675    3700 system_pods.go:89] "coredns-6f6b679f8f-2fdpw" [5eed297b-a1f8-4042-918d-abbd8cd0c025] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248680    3700 system_pods.go:89] "coredns-6f6b679f8f-vf22s" [b19e457d-d8ad-4a2f-a26d-2c4cce1dd187] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 05:44:44.248688    3700 system_pods.go:89] "etcd-ha-073000" [0e6857f6-85a9-46e7-9333-1a94d3f34283] Running
	I0816 05:44:44.248691    3700 system_pods.go:89] "etcd-ha-073000-m02" [3ab9bac7-feaa-4d06-840e-fb2d7a1b3f33] Running
	I0816 05:44:44.248694    3700 system_pods.go:89] "etcd-ha-073000-m03" [150ba510-542e-455a-bdbe-40d59bb236f1] Running
	I0816 05:44:44.248697    3700 system_pods.go:89] "kindnet-67bkr" [258def2f-5fc5-4c2d-85d4-da467d118328] Running
	I0816 05:44:44.248701    3700 system_pods.go:89] "kindnet-6w49d" [23fd976c-7b24-491f-a8e7-7d01cc0b6f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 05:44:44.248705    3700 system_pods.go:89] "kindnet-hz69v" [f26aff37-8f34-40c6-b855-cf129f5815b0] Running
	I0816 05:44:44.248708    3700 system_pods.go:89] "kindnet-vjtpn" [36bbb18d-a5d8-4c05-a445-8f98ab8a6df2] Running
	I0816 05:44:44.248711    3700 system_pods.go:89] "kube-apiserver-ha-073000" [a172e4ef-7890-4739-bc64-447df4c72600] Running
	I0816 05:44:44.248714    3700 system_pods.go:89] "kube-apiserver-ha-073000-m02" [fdc241cf-42fa-4e6d-a7ac-e33a40022f4f] Running
	I0816 05:44:44.248717    3700 system_pods.go:89] "kube-apiserver-ha-073000-m03" [325ca010-4724-44da-857a-222663447f06] Running
	I0816 05:44:44.248720    3700 system_pods.go:89] "kube-controller-manager-ha-073000" [6f6022a5-1123-442e-a205-62e91704de00] Running
	I0816 05:44:44.248723    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m02" [73a9e9a5-203a-40a8-a374-d871dcdbfff5] Running
	I0816 05:44:44.248726    3700 system_pods.go:89] "kube-controller-manager-ha-073000-m03" [0ca39149-9c6b-4231-ba32-04598623bdb5] Running
	I0816 05:44:44.248728    3700 system_pods.go:89] "kube-proxy-6nsmz" [c0fbbe4a-ce35-4430-a391-8f0fd4cf05b2] Running
	I0816 05:44:44.248731    3700 system_pods.go:89] "kube-proxy-c27jt" [fce39d95-9dd9-4295-82bd-8854aaa318b4] Running
	I0816 05:44:44.248734    3700 system_pods.go:89] "kube-proxy-tr2c8" [7cfcad48-01cf-4960-8625-f6d748e24976] Running
	I0816 05:44:44.248738    3700 system_pods.go:89] "kube-proxy-wcgdv" [b7436811-eaec-4ec1-88db-bad862cdb073] Running
	I0816 05:44:44.248742    3700 system_pods.go:89] "kube-scheduler-ha-073000" [4994655f-03d2-4c9d-aac0-4b892f67f51b] Running
	I0816 05:44:44.248745    3700 system_pods.go:89] "kube-scheduler-ha-073000-m02" [7120f07f-59c1-4067-8781-4940f3638a7d] Running
	I0816 05:44:44.248748    3700 system_pods.go:89] "kube-scheduler-ha-073000-m03" [029587bf-baab-48e9-8801-c50fb5a9ffa6] Running
	I0816 05:44:44.248751    3700 system_pods.go:89] "kube-vip-ha-073000" [3c4ef1ee-8ca4-47e9-b9aa-0dab8676e79d] Running
	I0816 05:44:44.248756    3700 system_pods.go:89] "kube-vip-ha-073000-m02" [69d5cd92-6a90-4902-9c9b-0108b920ec03] Running
	I0816 05:44:44.248759    3700 system_pods.go:89] "kube-vip-ha-073000-m03" [58ee3584-d207-4c48-8e83-0f1841525669] Running
	I0816 05:44:44.248761    3700 system_pods.go:89] "storage-provisioner" [6761bd0b-a562-4194-84a3-81ca426d6708] Running
	I0816 05:44:44.248766    3700 system_pods.go:126] duration metric: took 207.679371ms to wait for k8s-apps to be running ...
	I0816 05:44:44.248773    3700 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 05:44:44.248823    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:44:44.259672    3700 system_svc.go:56] duration metric: took 10.896688ms WaitForService to wait for kubelet
	I0816 05:44:44.259685    3700 kubeadm.go:582] duration metric: took 38.550213651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 05:44:44.259697    3700 node_conditions.go:102] verifying NodePressure condition ...
	I0816 05:44:44.438728    3700 request.go:632] Waited for 178.976716ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438870    3700 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0816 05:44:44.438882    3700 round_trippers.go:469] Request Headers:
	I0816 05:44:44.438928    3700 round_trippers.go:473]     Accept: application/json, */*
	I0816 05:44:44.438938    3700 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 05:44:44.442848    3700 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 05:44:44.443702    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443718    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443727    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443730    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443734    3700 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 05:44:44.443737    3700 node_conditions.go:123] node cpu capacity is 2
	I0816 05:44:44.443741    3700 node_conditions.go:105] duration metric: took 184.043638ms to run NodePressure ...
	I0816 05:44:44.443749    3700 start.go:241] waiting for startup goroutines ...
	I0816 05:44:44.443767    3700 start.go:255] writing updated cluster config ...
	I0816 05:44:44.469062    3700 out.go:201] 
	I0816 05:44:44.489551    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:44.489670    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.512183    3700 out.go:177] * Starting "ha-073000-m04" worker node in "ha-073000" cluster
	I0816 05:44:44.554442    3700 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:44:44.554478    3700 cache.go:56] Caching tarball of preloaded images
	I0816 05:44:44.554690    3700 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 05:44:44.554709    3700 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 05:44:44.554824    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.555855    3700 start.go:360] acquireMachinesLock for ha-073000-m04: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 05:44:44.555978    3700 start.go:364] duration metric: took 98.145µs to acquireMachinesLock for "ha-073000-m04"
	I0816 05:44:44.556004    3700 start.go:96] Skipping create...Using existing machine configuration
	I0816 05:44:44.556011    3700 fix.go:54] fixHost starting: m04
	I0816 05:44:44.556446    3700 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:44:44.556472    3700 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:44:44.565770    3700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52105
	I0816 05:44:44.566121    3700 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:44:44.566496    3700 main.go:141] libmachine: Using API Version  1
	I0816 05:44:44.566517    3700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:44:44.566729    3700 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:44:44.566845    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.566927    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:44:44.567001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.567096    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3643
	I0816 05:44:44.568001    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.568039    3700 fix.go:112] recreateIfNeeded on ha-073000-m04: state=Stopped err=<nil>
	I0816 05:44:44.568049    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	W0816 05:44:44.568121    3700 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 05:44:44.606366    3700 out.go:177] * Restarting existing hyperkit VM for "ha-073000-m04" ...
	I0816 05:44:44.663139    3700 main.go:141] libmachine: (ha-073000-m04) Calling .Start
	I0816 05:44:44.663315    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.663399    3700 main.go:141] libmachine: (ha-073000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid
	I0816 05:44:44.664399    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:44:44.664426    3700 main.go:141] libmachine: (ha-073000-m04) DBG | pid 3643 is in state "Stopped"
	I0816 05:44:44.664490    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid...
	I0816 05:44:44.664647    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Using UUID f2db23bc-c2a0-4ea2-9158-e93c928b5416
	I0816 05:44:44.689456    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Generated MAC f2:da:75:16:53:b7
	I0816 05:44:44.689481    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000
	I0816 05:44:44.689607    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689641    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2db23bc-c2a0-4ea2-9158-e93c928b5416", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 05:44:44.689691    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f2db23bc-c2a0-4ea2-9158-e93c928b5416", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"}
	I0816 05:44:44.689730    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f2db23bc-c2a0-4ea2-9158-e93c928b5416 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/ha-073000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-073000"
	I0816 05:44:44.689749    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 05:44:44.691094    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 DEBUG: hyperkit: Pid is 3728
	I0816 05:44:44.691611    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Attempt 0
	I0816 05:44:44.691627    3700 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:44:44.691728    3700 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3728
	I0816 05:44:44.693940    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Searching for f2:da:75:16:53:b7 in /var/db/dhcpd_leases ...
	I0816 05:44:44.694051    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0816 05:44:44.694092    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 05:44:44.694127    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 05:44:44.694155    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 05:44:44.694170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetConfigRaw
	I0816 05:44:44.694173    3700 main.go:141] libmachine: (ha-073000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09a90}
	I0816 05:44:44.694200    3700 main.go:141] libmachine: (ha-073000-m04) DBG | Found match: f2:da:75:16:53:b7
	I0816 05:44:44.694236    3700 main.go:141] libmachine: (ha-073000-m04) DBG | IP: 192.169.0.8
	I0816 05:44:44.695065    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:44.695278    3700 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/ha-073000/config.json ...
	I0816 05:44:44.695692    3700 machine.go:93] provisionDockerMachine start ...
	I0816 05:44:44.695703    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:44.695833    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:44.695931    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:44.696050    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696166    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:44.696254    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:44.696382    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:44.696563    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:44.696574    3700 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 05:44:44.699477    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 05:44:44.708676    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 05:44:44.709681    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:44.709706    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:44.709738    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:44.709752    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.096454    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 05:44:45.096470    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 05:44:45.211316    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 05:44:45.211336    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 05:44:45.211351    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 05:44:45.211358    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 05:44:45.212223    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 05:44:45.212237    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:45 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 05:44:50.828020    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 05:44:50.828090    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 05:44:50.828101    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 05:44:50.851950    3700 main.go:141] libmachine: (ha-073000-m04) DBG | 2024/08/16 05:44:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 05:44:55.760625    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 05:44:55.760639    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760793    3700 buildroot.go:166] provisioning hostname "ha-073000-m04"
	I0816 05:44:55.760805    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.760899    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.760990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.761085    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761159    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.761232    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.761366    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.761519    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.761528    3700 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-073000-m04 && echo "ha-073000-m04" | sudo tee /etc/hostname
	I0816 05:44:55.833156    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-073000-m04
	
	I0816 05:44:55.833170    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.833308    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:55.833414    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833503    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:55.833603    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:55.833738    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:55.833899    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:55.833910    3700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-073000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-073000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-073000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 05:44:55.900349    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
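The `/etc/hosts` patching script the provisioner ran above can be exercised standalone. A minimal sketch, run against a scratch copy instead of the real `/etc/hosts` (the `HOSTS` temp file and sample contents are illustrative; the real flow runs the same logic under `sudo` over SSH):

```shell
#!/bin/sh
# Sketch of the hostname-patching logic from the SSH command above,
# applied to a scratch file rather than /etc/hosts.
HOSTS=$(mktemp)
NAME="ha-073000-m04"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 entry: rewrite it to the new hostname
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # prints: 127.0.1.1 ha-073000-m04
```

Note the idempotence: the outer `grep` guard makes the script a no-op on a second run, which is why the log shows empty output when the entry is already correct.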
	I0816 05:44:55.900365    3700 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 05:44:55.900383    3700 buildroot.go:174] setting up certificates
	I0816 05:44:55.900391    3700 provision.go:84] configureAuth start
	I0816 05:44:55.900398    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetMachineName
	I0816 05:44:55.900534    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:55.900638    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:55.900717    3700 provision.go:143] copyHostCerts
	I0816 05:44:55.900744    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900810    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 05:44:55.900816    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 05:44:55.900947    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 05:44:55.901143    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901190    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 05:44:55.901195    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 05:44:55.901271    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 05:44:55.901417    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901455    3700 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 05:44:55.901460    3700 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 05:44:55.901535    3700 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 05:44:55.901685    3700 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.ha-073000-m04 san=[127.0.0.1 192.169.0.8 ha-073000-m04 localhost minikube]
	I0816 05:44:56.021206    3700 provision.go:177] copyRemoteCerts
	I0816 05:44:56.021264    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 05:44:56.021279    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.021423    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.021518    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.021612    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.021689    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:56.060318    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 05:44:56.060388    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 05:44:56.079682    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 05:44:56.079759    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 05:44:56.100671    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 05:44:56.100755    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 05:44:56.120911    3700 provision.go:87] duration metric: took 220.512292ms to configureAuth
	I0816 05:44:56.120927    3700 buildroot.go:189] setting minikube options for container-runtime
	I0816 05:44:56.121094    3700 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:44:56.121108    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:56.121244    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.121333    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.121413    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121488    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.121577    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.121685    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.121810    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.121818    3700 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 05:44:56.183314    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 05:44:56.183328    3700 buildroot.go:70] root file system type: tmpfs
	I0816 05:44:56.183405    3700 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 05:44:56.183418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.183543    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.183624    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183720    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.183811    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.183942    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.184086    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.184135    3700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 05:44:56.259224    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 05:44:56.259247    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:56.259375    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:56.259477    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259561    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:56.259648    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:56.259767    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:56.259901    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:56.259914    3700 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 05:44:57.846578    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 05:44:57.846593    3700 machine.go:96] duration metric: took 13.151151754s to provisionDockerMachine
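The unit-install step above uses a compare-and-swap pattern: write `docker.service.new`, then install it only when it differs from (or is missing at) the current path. A minimal sketch on scratch files, assuming the same `diff || { mv; reload; }` shape as the logged SSH command (paths and contents are illustrative; the real flow adds `sudo` and `systemctl daemon-reload/enable/restart`):

```shell
#!/bin/sh
# Sketch of the "write .new, install only if changed" unit-update pattern.
UNIT=$(mktemp)
NEW="${UNIT}.new"
rm -f "$UNIT"                          # simulate the first-boot case: no existing unit,
                                       # which is exactly the "diff: can't stat" output above
printf '[Unit]\nDescription=Demo\n' > "$NEW"

# diff exits non-zero when the files differ or the old unit is missing,
# so the move-and-reload branch fires in both cases.
if ! diff -u "$UNIT" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$UNIT"                  # real flow: sudo mv + systemctl -f daemon-reload/enable/restart
fi
cat "$UNIT"
```

When the unit is unchanged, `diff` succeeds and the whole right-hand side is skipped, so a daemon restart is avoided on repeated provisioning.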
	I0816 05:44:57.846601    3700 start.go:293] postStartSetup for "ha-073000-m04" (driver="hyperkit")
	I0816 05:44:57.846608    3700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 05:44:57.846619    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.846827    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 05:44:57.846841    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.846963    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.847057    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.847190    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.847325    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.890251    3700 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 05:44:57.893714    3700 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 05:44:57.893725    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 05:44:57.893828    3700 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 05:44:57.894005    3700 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 05:44:57.894011    3700 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 05:44:57.894210    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 05:44:57.903672    3700 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 05:44:57.936540    3700 start.go:296] duration metric: took 89.932708ms for postStartSetup
	I0816 05:44:57.936562    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:57.936732    3700 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0816 05:44:57.936743    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:57.936825    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:57.936908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:57.936990    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:57.937072    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:57.974376    3700 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0816 05:44:57.974431    3700 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0816 05:44:58.026259    3700 fix.go:56] duration metric: took 13.470511319s for fixHost
	I0816 05:44:58.026289    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.026437    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.026567    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026661    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.026739    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.026870    3700 main.go:141] libmachine: Using SSH client type: native
	I0816 05:44:58.027046    3700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9d00ea0] 0x9d03c00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0816 05:44:58.027055    3700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 05:44:58.089267    3700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723812298.037026032
	
	I0816 05:44:58.089280    3700 fix.go:216] guest clock: 1723812298.037026032
	I0816 05:44:58.089285    3700 fix.go:229] Guest: 2024-08-16 05:44:58.037026032 -0700 PDT Remote: 2024-08-16 05:44:58.026278 -0700 PDT m=+113.498555850 (delta=10.748032ms)
	I0816 05:44:58.089296    3700 fix.go:200] guest clock delta is within tolerance: 10.748032ms
	I0816 05:44:58.089300    3700 start.go:83] releasing machines lock for "ha-073000-m04", held for 13.533577972s
	I0816 05:44:58.089315    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.089444    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:44:58.113019    3700 out.go:177] * Found network options:
	I0816 05:44:58.133803    3700 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0816 05:44:58.154869    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.154894    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.154908    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155418    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155540    3700 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:44:58.155619    3700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 05:44:58.155647    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	W0816 05:44:58.155674    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	W0816 05:44:58.155690    3700 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 05:44:58.155757    3700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 05:44:58.155778    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:44:58.155796    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155925    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:44:58.155946    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156056    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:44:58.156076    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156184    3700 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:44:58.156198    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:44:58.156285    3700 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	W0816 05:44:58.193631    3700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 05:44:58.193701    3700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 05:44:58.236070    3700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 05:44:58.236085    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.236153    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.252488    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 05:44:58.262662    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 05:44:58.272809    3700 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.272876    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 05:44:58.283088    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.293199    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 05:44:58.302692    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 05:44:58.312080    3700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 05:44:58.321436    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 05:44:58.330649    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 05:44:58.339785    3700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 05:44:58.349176    3700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 05:44:58.357543    3700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 05:44:58.365884    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.462788    3700 ssh_runner.go:195] Run: sudo systemctl restart containerd
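The sequence of `sed -i -r` rewrites above configures containerd's `config.toml` in place (cgroupfs driver, pause image, OOM score handling). A minimal sketch of the same substitutions on a scratch copy, with illustrative sample content (the real flow targets `/etc/containerd/config.toml` over SSH with `sudo`):

```shell
#!/bin/sh
# Sketch of the config.toml rewrites from the ssh_runner commands above,
# applied to a scratch file instead of /etc/containerd/config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
  restrict_oom_score_adj = true
EOF

# Same indentation-preserving substitutions the provisioner runs:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
cat "$CFG"
```

The `\1` backreference keeps each key at its original indentation, which matters because TOML tables in this file are indentation-formatted even though TOML itself does not require it.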
	I0816 05:44:58.483641    3700 start.go:495] detecting cgroup driver to use...
	I0816 05:44:58.483717    3700 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 05:44:58.502138    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.514733    3700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 05:44:58.534512    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 05:44:58.547599    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.558372    3700 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 05:44:58.578053    3700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 05:44:58.588770    3700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 05:44:58.604147    3700 ssh_runner.go:195] Run: which cri-dockerd
	I0816 05:44:58.607001    3700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 05:44:58.614131    3700 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 05:44:58.627780    3700 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 05:44:58.724561    3700 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 05:44:58.838116    3700 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 05:44:58.838140    3700 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 05:44:58.852167    3700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 05:44:58.944841    3700 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 05:45:59.852967    3700 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.909307795s)
	I0816 05:45:59.853035    3700 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 05:45:59.887051    3700 out.go:201] 
	W0816 05:45:59.908317    3700 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 12:44:56 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.477961385Z" level=info msg="Starting up"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.478651123Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 12:44:56 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:56.479149818Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=492
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.497251014Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512736016Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512786960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512832906Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512843449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.512990846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513025418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513142091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513176878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513189848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513197982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513328837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.513514337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515123592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515162448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515278467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515313029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515424326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.515511733Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517455314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517544772Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517585141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517601510Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517612297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517713222Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.517933474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518033958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518069471Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518088650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518101306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518111033Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518119014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518128230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518155729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518197753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518209146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518217247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518232727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518242479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518257521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518270826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518280074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518288937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518296642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518305847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518314748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518324203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518386404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518396238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518404404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518414105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518428969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518437387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518445132Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518491204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518506443Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518514647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518522672Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518529245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518537689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518544653Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518899090Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.518957259Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519012111Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 12:44:56 ha-073000-m04 dockerd[492]: time="2024-08-16T12:44:56.519026933Z" level=info msg="containerd successfully booted in 0.022691s"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.498621326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.511032578Z" level=info msg="Loading containers: start."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.643404815Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.708639630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756823239Z" level=warning msg="error locating sandbox id 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd: sandbox 61b14996cbc418ae1ab56e9da08cf80c65d6d349d6af3af728a1b0abcd7f69cd not found"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.756925263Z" level=info msg="Loading containers: done."
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.763915655Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.764081581Z" level=info msg="Daemon has completed initialization"
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.785909245Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 12:44:57 ha-073000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 16 12:44:57 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:57.786078565Z" level=info msg="API listen on [::]:2376"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.921954679Z" level=info msg="Processing signal 'terminated'"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923118966Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923233559Z" level=info msg="Daemon shutdown complete"
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923326494Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 12:44:58 ha-073000-m04 dockerd[486]: time="2024-08-16T12:44:58.923341810Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 12:44:58 ha-073000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 12:44:59 ha-073000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 12:44:59 ha-073000-m04 dockerd[1163]: time="2024-08-16T12:44:59.962962742Z" level=info msg="Starting up"
	Aug 16 12:45:59 ha-073000-m04 dockerd[1163]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 12:45:59 ha-073000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 05:45:59.908450    3700 out.go:270] * 
	W0816 05:45:59.909700    3700 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 05:45:59.951019    3700 out.go:201] 
	
	
	==> Docker <==
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.299956506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300173351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300182269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.300373518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:42 ha-073000 dockerd[1130]: time="2024-08-16T12:44:42.302304016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263043998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263098724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263109067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:44:46 ha-073000 dockerd[1130]: time="2024-08-16T12:44:46.263358182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255557325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255603642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255615508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:09 ha-073000 dockerd[1130]: time="2024-08-16T12:45:09.255680038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250561832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250773743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.250784609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.251063565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.467903824Z" level=info msg="shim disconnected" id=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 namespace=moby
	Aug 16 12:45:12 ha-073000 dockerd[1123]: time="2024-08-16T12:45:12.468049001Z" level=info msg="ignoring event" container=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.468960050Z" level=warning msg="cleaning up after shim disconnected" id=8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488 namespace=moby
	Aug 16 12:45:12 ha-073000 dockerd[1130]: time="2024-08-16T12:45:12.469006000Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 12:46:38 ha-073000 dockerd[1130]: time="2024-08-16T12:46:38.245177465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 12:46:38 ha-073000 dockerd[1130]: time="2024-08-16T12:46:38.245249572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 12:46:38 ha-073000 dockerd[1130]: time="2024-08-16T12:46:38.245263807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 12:46:38 ha-073000 dockerd[1130]: time="2024-08-16T12:46:38.245734898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec64bcf3021a3       6e38f40d628db       38 seconds ago      Running             storage-provisioner       4                   4d9d98ef92415       storage-provisioner
	53efad29a380d       cbb01a7bd410d       2 minutes ago       Running             coredns                   2                   97e99c8f985cf       coredns-6f6b679f8f-2fdpw
	dde34f0f1905b       cbb01a7bd410d       2 minutes ago       Running             coredns                   2                   cbedc350d9a72       coredns-6f6b679f8f-vf22s
	d53d4035a10b6       12968670680f4       2 minutes ago       Running             kindnet-cni               2                   64a7fad2cec73       kindnet-6w49d
	ca60e2c666f71       ad83b2ca7b09e       2 minutes ago       Running             kube-proxy                2                   b53d07e491338       kube-proxy-6nsmz
	8da86bdda1be0       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       3                   4d9d98ef92415       storage-provisioner
	0d6c46a2a7e36       8c811b4aec35f       2 minutes ago       Running             busybox                   2                   5a113806a9083       busybox-7dff88458-tbh6p
	d3bc52584d24f       045733566833c       2 minutes ago       Running             kube-controller-manager   4                   ae78baa701d35       kube-controller-manager-ha-073000
	5261283416a26       38af8ddebf499       3 minutes ago       Running             kube-vip                  1                   8a3e2cb139422       kube-vip-ha-073000
	4132415e113f3       604f5db92eaa8       3 minutes ago       Running             kube-apiserver            2                   5b0df036eddf2       kube-apiserver-ha-073000
	35d4653c8cb24       1766f54c897f0       3 minutes ago       Running             kube-scheduler            2                   439518e74411c       kube-scheduler-ha-073000
	746a25d99f7e9       2e96e5913fc06       3 minutes ago       Running             etcd                      2                   01df760da9958       etcd-ha-073000
	6e9db99cb249f       045733566833c       3 minutes ago       Exited              kube-controller-manager   3                   ae78baa701d35       kube-controller-manager-ha-073000
	45a286f9bcbe0       8c811b4aec35f       6 minutes ago       Exited              busybox                   1                   917fa53aa567f       busybox-7dff88458-tbh6p
	4cea51d49ca8a       cbb01a7bd410d       6 minutes ago       Exited              coredns                   1                   da30f2a6f620a       coredns-6f6b679f8f-2fdpw
	ac45a09e68e6e       12968670680f4       6 minutes ago       Exited              kindnet-cni               1                   b7cba0c6730d7       kindnet-6w49d
	6bd9db004e0f2       cbb01a7bd410d       6 minutes ago       Exited              coredns                   1                   9723d60c28159       coredns-6f6b679f8f-vf22s
	9ac6acc1a0063       ad83b2ca7b09e       6 minutes ago       Exited              kube-proxy                1                   b73943b66f38c       kube-proxy-6nsmz
	817998dc223bd       38af8ddebf499       7 minutes ago       Exited              kube-vip                  0                   3aee62c916259       kube-vip-ha-073000
	f7dc3b77e3e36       1766f54c897f0       7 minutes ago       Exited              kube-scheduler            1                   a39f7babb7d55       kube-scheduler-ha-073000
	bbea06dccbfca       2e96e5913fc06       7 minutes ago       Exited              etcd                      1                   a744d07ec14bd       etcd-ha-073000
	2794b950d2a1a       604f5db92eaa8       7 minutes ago       Exited              kube-apiserver            1                   1b8fe978c9574       kube-apiserver-ha-073000
	
	
	==> coredns [4cea51d49ca8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48498 - 38677 "HINFO IN 4258714537102711440.5140164315290019176. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005829685s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1808017765]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30003ms):
	Trace[1808017765]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.953)
	Trace[1808017765]: [30.003041878s] [30.003041878s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1742264180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.953) (total time: 30001ms):
	Trace[1742264180]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.954)
	Trace[1742264180]: [30.001952334s] [30.001952334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1076876587]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30003ms):
	Trace[1076876587]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:41:27.953)
	Trace[1076876587]: [30.003826704s] [30.003826704s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [53efad29a380] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39872 - 38910 "HINFO IN 8344980917306972801.1301155251568300364. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073019455s
	
	
	==> coredns [6bd9db004e0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34749 - 50535 "HINFO IN 6826521007957410060.7380457420194179284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005638083s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2071701981]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30002ms):
	Trace[2071701981]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.953)
	Trace[2071701981]: [30.002664177s] [30.002664177s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[81888879]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.953) (total time: 30001ms):
	Trace[81888879]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.955)
	Trace[81888879]: [30.001793099s] [30.001793099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1240220066]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Aug-2024 12:40:57.952) (total time: 30002ms):
	Trace[1240220066]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:41:27.953)
	Trace[1240220066]: [30.002668968s] [30.002668968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dde34f0f1905] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40084 - 46500 "HINFO IN 4264016345420347209.7126907159756313777. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02255325s
	
	
	==> describe nodes <==
	Name:               ha-073000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T05_34_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:34:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:34:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:44:25 +0000   Fri, 16 Aug 2024 12:44:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-073000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e52eb9451e244b8aa696383c6e23553e
	  System UUID:                449f4e9a-0000-0000-9271-363ec4bdb253
	  Boot ID:                    eacb4432-039c-4561-b63c-a22e6109d42f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tbh6p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-6f6b679f8f-2fdpw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-6f6b679f8f-vf22s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-073000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6w49d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-073000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-073000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6nsmz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-073000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-073000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 6m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  NodeReady                12m                    kubelet          Node ha-073000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           8m8s                   node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  Starting                 3m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m30s (x8 over 3m30s)  kubelet          Node ha-073000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x8 over 3m30s)  kubelet          Node ha-073000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x7 over 3m30s)  kubelet          Node ha-073000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	  Normal  RegisteredNode           20s                    node-controller  Node ha-073000 event: Registered Node ha-073000 in Controller
	
	
	Name:               ha-073000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_35_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:44:14 +0000   Fri, 16 Aug 2024 12:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-073000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 74aa745bc3424232a48f3120c1bc5001
	  System UUID:                2ecb470f-0000-0000-9281-b78e2fd82941
	  Boot ID:                    c5f8c789-3b3b-40a8-beef-6bd94cba0d06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mq4rd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 etcd-ha-073000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-vjtpn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-073000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-073000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c27jt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-073000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-073000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m1s                   kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 8m11s                  kube-proxy       
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 8m16s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 8m15s                  kubelet          Node ha-073000-m02 has been rebooted, boot id: cd5a6628-e2f5-4c6f-91f1-5ff24dad7ec8
	  Normal   NodeHasSufficientPID     8m15s (x2 over 8m16s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m15s (x2 over 8m16s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m15s (x2 over 8m16s)  kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m8s                   node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   NodeHasSufficientPID     6m53s (x7 over 6m53s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m53s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m53s (x8 over 6m53s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m53s (x8 over 6m53s)  kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           6m41s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           6m20s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           6m4s                   node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node ha-073000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node ha-073000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m59s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	  Normal   RegisteredNode           20s                    node-controller  Node ha-073000-m02 event: Registered Node ha-073000-m02 in Controller
	
	
	Name:               ha-073000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_37_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:37:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:42:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:42:25 +0000   Fri, 16 Aug 2024 12:44:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-073000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad7455d2901a48ca849fbe74152548be
	  System UUID:                f2db4ea2-0000-0000-9158-e93c928b5416
	  Boot ID:                    c501d95c-4cf4-48d1-a140-e26142bbc85e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8cgvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kindnet-67bkr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m24s
	  kube-system                 kube-proxy-wcgdv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m17s                  kube-proxy       
	  Normal   Starting                 4m50s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    9m24s (x2 over 9m24s)  kubelet          Node ha-073000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m24s (x2 over 9m24s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m24s (x2 over 9m24s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m22s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           9m20s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           9m19s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeReady                9m1s                   kubelet          Node ha-073000-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m8s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           6m41s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           6m20s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           6m4s                   node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeNotReady             6m1s                   node-controller  Node ha-073000-m04 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     4m52s (x3 over 4m52s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 4m52s (x2 over 4m52s)  kubelet          Node ha-073000-m04 has been rebooted, boot id: c501d95c-4cf4-48d1-a140-e26142bbc85e
	  Normal   NodeHasSufficientMemory  4m52s (x3 over 4m52s)  kubelet          Node ha-073000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m52s (x3 over 4m52s)  kubelet          Node ha-073000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             4m52s                  kubelet          Node ha-073000-m04 status is now: NodeNotReady
	  Normal   NodeReady                4m52s                  kubelet          Node ha-073000-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m59s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	  Normal   NodeNotReady             2m19s                  node-controller  Node ha-073000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           20s                    node-controller  Node ha-073000-m04 event: Registered Node ha-073000-m04 in Controller
	
	
	Name:               ha-073000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-073000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=ha-073000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_46_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:46:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-073000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:47:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 12:47:07 +0000   Fri, 16 Aug 2024 12:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 12:47:07 +0000   Fri, 16 Aug 2024 12:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 12:47:07 +0000   Fri, 16 Aug 2024 12:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 12:47:07 +0000   Fri, 16 Aug 2024 12:47:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-073000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 36dea4fcc1d748fb890e480e1ef6412d
	  System UUID:                4176454c-0000-0000-ab6e-5089b18c9a42
	  Boot ID:                    25b3e507-68c2-4d6e-8935-83d0639dd957
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-073000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28s
	  kube-system                 kindnet-qzxtg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28s
	  kube-system                 kube-apiserver-ha-073000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-ha-073000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-dm9ds                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-ha-073000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-vip-ha-073000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node ha-073000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node ha-073000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node ha-073000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node ha-073000-m05 event: Registered Node ha-073000-m05 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-073000-m05 event: Registered Node ha-073000-m05 in Controller
	  Normal  RegisteredNode           20s                node-controller  Node ha-073000-m05 event: Registered Node ha-073000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.007992] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.682346] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006927] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.707235] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.270759] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.716408] systemd-fstab-generator[495]: Ignoring "noauto" option for root device
	[  +0.106851] systemd-fstab-generator[507]: Ignoring "noauto" option for root device
	[  +2.000923] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.251980] systemd-fstab-generator[1089]: Ignoring "noauto" option for root device
	[  +0.119628] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +0.114647] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +2.500095] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.050270] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.052123] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.110888] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.123146] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.430382] systemd-fstab-generator[1533]: Ignoring "noauto" option for root device
	[  +6.822975] kauditd_printk_skb: 110 callbacks suppressed
	[Aug16 12:44] kauditd_printk_skb: 40 callbacks suppressed
	[ +27.030866] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.057508] kauditd_printk_skb: 36 callbacks suppressed
	[Aug16 12:46] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [746a25d99f7e] <==
	{"level":"warn","ts":"2024-08-16T12:44:14.432877Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"330f9299269ea03a","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-16T12:44:14.432945Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"330f9299269ea03a","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-16T12:46:49.510321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(3679320607060566074 13314548521573537860) learners=(14279937017489711271)"}
	{"level":"info","ts":"2024-08-16T12:46:49.510698Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"c62c870f1e6ce0a7","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-08-16T12:46:49.510740Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.511020Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.511785Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.511826Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-08-16T12:46:49.512068Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.512102Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.512199Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:49.512211Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"warn","ts":"2024-08-16T12:46:49.529608Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.169.0.9:48730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-16T12:46:50.553125Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"c62c870f1e6ce0a7","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-08-16T12:46:51.074888Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:51.082427Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:51.086597Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:51.114518Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c62c870f1e6ce0a7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-16T12:46:51.114716Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"info","ts":"2024-08-16T12:46:51.114903Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"c62c870f1e6ce0a7","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-16T12:46:51.114939Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c62c870f1e6ce0a7"}
	{"level":"warn","ts":"2024-08-16T12:46:51.546999Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"c62c870f1e6ce0a7","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-08-16T12:46:52.047896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(3679320607060566074 13314548521573537860 14279937017489711271)"}
	{"level":"info","ts":"2024-08-16T12:46:52.048004Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-08-16T12:46:52.048021Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c62c870f1e6ce0a7"}
	
	
	==> etcd [bbea06dccbfc] <==
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630077Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:51.632815Z","time spent":"4.997260741s","remote":"127.0.0.1:36324","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.901369422s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T12:42:56.630156Z","caller":"traceutil/trace.go:171","msg":"trace[1178344434] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"5.901383392s","start":"2024-08-16T12:42:50.728768Z","end":"2024-08-16T12:42:56.630152Z","steps":["trace[1178344434] 'agreement among raft nodes before linearized reading'  (duration: 5.901369776s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:42:56.630168Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:50.728645Z","time spent":"5.901518585s","remote":"127.0.0.1:36182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.630211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.774725657s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-16T12:42:56.630221Z","caller":"traceutil/trace.go:171","msg":"trace[1675355472] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.774736143s","start":"2024-08-16T12:42:54.855482Z","end":"2024-08-16T12:42:56.630218Z","steps":["trace[1675355472] 'agreement among raft nodes before linearized reading'  (duration: 1.774725523s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-16T12:42:56.630230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-16T12:42:54.855466Z","time spent":"1.774762152s","remote":"127.0.0.1:36326","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/16 12:42:56 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-16T12:42:56.677616Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:42:56.677932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T12:42:56.680885Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-16T12:42:56.681006Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681017Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681029Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681092Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681114Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681156Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.681199Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"330f9299269ea03a"}
	{"level":"info","ts":"2024-08-16T12:42:56.684250Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T12:42:56.684317Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-16T12:42:56.684324Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-073000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 12:47:18 up 4 min,  0 users,  load average: 0.29, 0.16, 0.06
	Linux ha-073000 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ac45a09e68e6] <==
	I0816 12:42:18.785854       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:18.785985       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0816 12:42:18.786013       1 main.go:322] Node ha-073000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:42:18.786121       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:18.786181       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:28.779187       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:28.779366       1 main.go:299] handling current node
	I0816 12:42:28.779397       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:28.779418       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:28.779624       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0816 12:42:28.779796       1 main.go:322] Node ha-073000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:42:28.779915       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:28.780031       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:38.780023       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:38.780075       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:38.780197       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:38.780225       1 main.go:299] handling current node
	I0816 12:42:38.780252       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:38.780276       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:48.780124       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:42:48.780274       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:42:48.780772       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:42:48.780891       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:42:48.781100       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:42:48.781255       1 main.go:299] handling current node
	
	
	==> kindnet [d53d4035a10b] <==
	I0816 12:46:57.494693       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:46:57.494732       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:46:57.495023       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0816 12:46:57.495053       1 main.go:322] Node ha-073000-m05 has CIDR [10.244.2.0/24] 
	I0816 12:46:57.495128       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0} 
	I0816 12:46:57.495376       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:46:57.495555       1 main.go:299] handling current node
	I0816 12:46:57.495586       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:46:57.495592       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:47:07.494393       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:47:07.494451       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:47:07.494648       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0816 12:47:07.494677       1 main.go:322] Node ha-073000-m05 has CIDR [10.244.2.0/24] 
	I0816 12:47:07.494778       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:47:07.494808       1 main.go:299] handling current node
	I0816 12:47:07.494817       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:47:07.494821       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:47:17.501349       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0816 12:47:17.501388       1 main.go:299] handling current node
	I0816 12:47:17.501398       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0816 12:47:17.501403       1 main.go:322] Node ha-073000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:47:17.501507       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0816 12:47:17.501534       1 main.go:322] Node ha-073000-m04 has CIDR [10.244.3.0/24] 
	I0816 12:47:17.501584       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0816 12:47:17.501611       1 main.go:322] Node ha-073000-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2794b950d2a1] <==
	W0816 12:42:56.654442       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654464       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654486       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654519       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.654574       1 controller.go:195] "Failed to update lease" err="rpc error: code = Unknown desc = malformed header: missing HTTP content-type"
	W0816 12:42:56.654759       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654785       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654809       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654833       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654856       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654881       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.654910       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.655805       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0816 12:42:56.655997       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0816 12:42:56.656363       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656471       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656723       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656752       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656773       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.656793       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0816 12:42:56.656860       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0816 12:42:56.659986       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.660258       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:42:56.660291       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0816 12:42:56.693611       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [4132415e113f] <==
	I0816 12:44:14.239627       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0816 12:44:14.261469       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0816 12:44:14.333789       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 12:44:14.337352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 12:44:14.337399       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 12:44:14.338838       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 12:44:14.338867       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 12:44:14.339105       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 12:44:14.339594       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 12:44:14.339757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 12:44:14.346222       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 12:44:14.351290       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 12:44:14.351356       1 policy_source.go:224] refreshing policies
	I0816 12:44:14.362299       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 12:44:14.362365       1 aggregator.go:171] initial CRD sync complete...
	I0816 12:44:14.362371       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 12:44:14.362420       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 12:44:14.362476       1 cache.go:39] Caches are synced for autoregister controller
	W0816 12:44:14.363142       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0816 12:44:14.364917       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 12:44:14.371348       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0816 12:44:14.374126       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0816 12:44:14.421402       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 12:44:15.239815       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 12:44:15.584175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [6e9db99cb249] <==
	I0816 12:43:54.709568       1 serving.go:386] Generated self-signed cert in-memory
	I0816 12:43:55.294404       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0816 12:43:55.294436       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:43:55.301712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0816 12:43:55.302087       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 12:43:55.302339       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:43:55.302387       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0816 12:44:15.306064       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d3bc52584d24] <==
	I0816 12:45:12.904461       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="165.928µs"
	E0816 12:46:49.252100       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-bhqmd failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-bhqmd\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0816 12:46:49.403206       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-073000-m05\" does not exist"
	I0816 12:46:49.414477       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-073000-m05" podCIDRs=["10.244.2.0/24"]
	I0816 12:46:49.414522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:49.414546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:49.433934       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:49.461220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:50.836385       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-073000-m05"
	I0816 12:46:50.898141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:52.186340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:52.808044       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:52.907082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:53.710870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:53.805128       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:57.089907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:46:57.105196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:46:57.187682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m04"
	I0816 12:46:59.633984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:00.976858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:03.897941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:07.286158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:07.941896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:07.949963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	I0816 12:47:08.730962       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-073000-m05"
	
	
	==> kube-proxy [9ac6acc1a006] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:40:58.119214       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:40:58.140964       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 12:40:58.141059       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:40:58.183215       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:40:58.183283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:40:58.183302       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:40:58.187885       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:40:58.189155       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:40:58.189310       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:40:58.194403       1 config.go:197] "Starting service config controller"
	I0816 12:40:58.194925       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:40:58.195375       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:40:58.195405       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:40:58.198342       1 config.go:326] "Starting node config controller"
	I0816 12:40:58.198371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:40:58.295736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:40:58.295786       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:40:58.298813       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ca60e2c666f7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:44:42.606451       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:44:42.619729       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0816 12:44:42.619898       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:44:42.653091       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:44:42.653110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:44:42.653134       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:44:42.655923       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:44:42.656347       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:44:42.656397       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:44:42.658856       1 config.go:197] "Starting service config controller"
	I0816 12:44:42.659229       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:44:42.659453       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:44:42.659543       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:44:42.660619       1 config.go:326] "Starting node config controller"
	I0816 12:44:42.660792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:44:42.759577       1 shared_informer.go:320] Caches are synced for service config
	I0816 12:44:42.759915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:44:42.761512       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35d4653c8cb2] <==
	W0816 12:44:05.748496       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 12:44:05.748502       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 12:44:14.279151       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 12:44:14.279189       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:44:14.280821       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 12:44:14.284117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 12:44:14.284473       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:44:14.287015       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:44:14.385344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 12:46:49.441687       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tx5rq\": pod kindnet-tx5rq is already assigned to node \"ha-073000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-tx5rq" node="ha-073000-m05"
	E0816 12:46:49.441827       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b8b29979-4538-429a-9603-719701403308(kube-system/kindnet-tx5rq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tx5rq"
	E0816 12:46:49.441859       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tx5rq\": pod kindnet-tx5rq is already assigned to node \"ha-073000-m05\"" pod="kube-system/kindnet-tx5rq"
	I0816 12:46:49.441939       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tx5rq" node="ha-073000-m05"
	E0816 12:46:49.445488       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dm9ds\": pod kube-proxy-dm9ds is already assigned to node \"ha-073000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dm9ds" node="ha-073000-m05"
	E0816 12:46:49.447914       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c088fc8f-518a-4e63-b4d5-6f2a9fcc6b39(kube-system/kube-proxy-dm9ds) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dm9ds"
	E0816 12:46:49.448664       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dm9ds\": pod kube-proxy-dm9ds is already assigned to node \"ha-073000-m05\"" pod="kube-system/kube-proxy-dm9ds"
	I0816 12:46:49.448753       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dm9ds" node="ha-073000-m05"
	E0816 12:46:49.463776       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzxtg\": pod kindnet-qzxtg is already assigned to node \"ha-073000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzxtg" node="ha-073000-m05"
	E0816 12:46:49.463904       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzxtg\": pod kindnet-qzxtg is already assigned to node \"ha-073000-m05\"" pod="kube-system/kindnet-qzxtg"
	E0816 12:46:49.464093       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bgds6\": pod kube-proxy-bgds6 is already assigned to node \"ha-073000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bgds6" node="ha-073000-m05"
	E0816 12:46:49.464229       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bgds6\": pod kube-proxy-bgds6 is already assigned to node \"ha-073000-m05\"" pod="kube-system/kube-proxy-bgds6"
	E0816 12:46:50.514036       1 framework.go:1305] "Plugin Failed" err="pods \"kube-proxy-bgds6\" not found" plugin="DefaultBinder" pod="kube-system/kube-proxy-bgds6" node="ha-073000-m05"
	E0816 12:46:50.514225       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": pods \"kube-proxy-bgds6\" not found" pod="kube-system/kube-proxy-bgds6"
	I0816 12:46:50.514376       1 schedule_one.go:1064] "Pod doesn't exist in informer cache" pod="kube-system/kube-proxy-bgds6" err="pod \"kube-proxy-bgds6\" not found"
	E0816 12:46:50.517781       1 schedule_one.go:1106] "Error updating pod" err="pods \"kube-proxy-bgds6\" not found" pod="kube-system/kube-proxy-bgds6"
	
	
	==> kube-scheduler [f7dc3b77e3e3] <==
	I0816 12:40:13.671703       1 serving.go:386] Generated self-signed cert in-memory
	W0816 12:40:24.123596       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0816 12:40:24.123639       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 12:40:24.123645       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 12:40:33.202244       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 12:40:33.203684       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:40:33.206420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 12:40:33.206746       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 12:40:33.207443       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:40:33.207300       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:40:33.311876       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 12:42:32.624416       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8cgvv\": pod busybox-7dff88458-8cgvv is already assigned to node \"ha-073000-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8cgvv" node="ha-073000-m04"
	E0816 12:42:32.627510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5d52700b-2644-418d-ab40-6fc48f247d6f(default/busybox-7dff88458-8cgvv) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8cgvv"
	E0816 12:42:32.627623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8cgvv\": pod busybox-7dff88458-8cgvv is already assigned to node \"ha-073000-m04\"" pod="default/busybox-7dff88458-8cgvv"
	I0816 12:42:32.627739       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8cgvv" node="ha-073000-m04"
	E0816 12:42:56.710485       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 16 12:45:12 ha-073000 kubelet[1540]: I0816 12:45:12.862725    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:12 ha-073000 kubelet[1540]: E0816 12:45:12.862845    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:24 ha-073000 kubelet[1540]: I0816 12:45:24.202903    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:24 ha-073000 kubelet[1540]: E0816 12:45:24.203175    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:39 ha-073000 kubelet[1540]: I0816 12:45:39.202752    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:39 ha-073000 kubelet[1540]: E0816 12:45:39.202899    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:45:47 ha-073000 kubelet[1540]: E0816 12:45:47.224411    1540 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:45:47 ha-073000 kubelet[1540]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:45:47 ha-073000 kubelet[1540]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 12:45:50 ha-073000 kubelet[1540]: I0816 12:45:50.203594    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:45:50 ha-073000 kubelet[1540]: E0816 12:45:50.204150    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:46:01 ha-073000 kubelet[1540]: I0816 12:46:01.203069    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:46:01 ha-073000 kubelet[1540]: E0816 12:46:01.203181    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:46:13 ha-073000 kubelet[1540]: I0816 12:46:13.202369    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:46:13 ha-073000 kubelet[1540]: E0816 12:46:13.202486    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:46:24 ha-073000 kubelet[1540]: I0816 12:46:24.202426    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:46:24 ha-073000 kubelet[1540]: E0816 12:46:24.202565    1540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6761bd0b-a562-4194-84a3-81ca426d6708)\"" pod="kube-system/storage-provisioner" podUID="6761bd0b-a562-4194-84a3-81ca426d6708"
	Aug 16 12:46:38 ha-073000 kubelet[1540]: I0816 12:46:38.203218    1540 scope.go:117] "RemoveContainer" containerID="8da86bdda1be0d241c059ae787d2c7beb934ed1b5293e87c318938fbaf4b0488"
	Aug 16 12:46:47 ha-073000 kubelet[1540]: E0816 12:46:47.225359    1540 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 12:46:47 ha-073000 kubelet[1540]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 12:46:47 ha-073000 kubelet[1540]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 12:46:47 ha-073000 kubelet[1540]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 12:46:47 ha-073000 kubelet[1540]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-073000 -n ha-073000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-073000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (75.64s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (136.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0816 05:52:52.926858    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:53:27.976629    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.785825007s)

                                                
                                                
-- stdout --
	* [mount-start-1-281000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-281000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-281000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:9a:a4:dd:30:ec
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-281000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:c7:bd:37:7c:84
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:c7:bd:37:7c:84
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-281000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-281000 -n mount-start-1-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-281000 -n mount-start-1-281000: exit status 7 (80.225673ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 05:53:48.070361    4064 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 05:53:48.070381    4064 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-281000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.87s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (138.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0816 06:01:31.053343    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (2m14.387774829s)

                                                
                                                
-- stdout --
	* [multinode-120000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-120000" primary control-plane node in "multinode-120000" cluster
	* Restarting existing hyperkit VM for "multinode-120000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-120000-m02" worker node in "multinode-120000" cluster
	* Restarting existing hyperkit VM for "multinode-120000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.14
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 06:00:00.437269    4495 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:00:00.437534    4495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.437540    4495 out.go:358] Setting ErrFile to fd 2...
	I0816 06:00:00.437546    4495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.437715    4495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:00:00.439130    4495 out.go:352] Setting JSON to false
	I0816 06:00:00.461246    4495 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2978,"bootTime":1723810222,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:00:00.461338    4495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:00:00.484085    4495 out.go:177] * [multinode-120000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:00:00.526534    4495 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:00:00.526607    4495 notify.go:220] Checking for updates...
	I0816 06:00:00.557254    4495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:00.578492    4495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:00:00.600661    4495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:00:00.648199    4495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:00:00.669461    4495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:00:00.691247    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:00.691911    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.692016    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.701577    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53279
	I0816 06:00:00.701934    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.702350    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.702361    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.702577    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.702697    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.702916    4495 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:00:00.703168    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.703191    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.711617    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53281
	I0816 06:00:00.711932    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.712287    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.712302    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.712518    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.712671    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.741305    4495 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 06:00:00.783251    4495 start.go:297] selected driver: hyperkit
	I0816 06:00:00.783283    4495 start.go:901] validating driver "hyperkit" against &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false k
ubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:00.783519    4495 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:00:00.783713    4495 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:00:00.783913    4495 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:00:00.793394    4495 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:00:00.797095    4495 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.797118    4495 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:00:00.799739    4495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:00:00.799780    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:00.799788    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:00.799858    4495 start.go:340] cluster config:
	{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plu
gin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:00.799970    4495 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:00:00.821048    4495 out.go:177] * Starting "multinode-120000" primary control-plane node in "multinode-120000" cluster
	I0816 06:00:00.842239    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:00.842333    4495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:00:00.842355    4495 cache.go:56] Caching tarball of preloaded images
	I0816 06:00:00.842583    4495 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:00:00.842601    4495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:00:00.842770    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:00.843697    4495 start.go:360] acquireMachinesLock for multinode-120000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:00:00.843818    4495 start.go:364] duration metric: took 95.678µs to acquireMachinesLock for "multinode-120000"
	I0816 06:00:00.843855    4495 start.go:96] Skipping create...Using existing machine configuration
	I0816 06:00:00.843873    4495 fix.go:54] fixHost starting: 
	I0816 06:00:00.844306    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.844337    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.853481    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53283
	I0816 06:00:00.853843    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.854209    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.854222    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.854433    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.854578    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.854677    4495 main.go:141] libmachine: (multinode-120000) Calling .GetState
	I0816 06:00:00.854759    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.854838    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4436
	I0816 06:00:00.855754    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid 4436 missing from process table
	I0816 06:00:00.855779    4495 fix.go:112] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	I0816 06:00:00.855796    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	W0816 06:00:00.855889    4495 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 06:00:00.877085    4495 out.go:177] * Restarting existing hyperkit VM for "multinode-120000" ...
	I0816 06:00:00.897869    4495 main.go:141] libmachine: (multinode-120000) Calling .Start
	I0816 06:00:00.898015    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.898040    4495 main.go:141] libmachine: (multinode-120000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid
	I0816 06:00:00.898985    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid 4436 missing from process table
	I0816 06:00:00.899002    4495 main.go:141] libmachine: (multinode-120000) DBG | pid 4436 is in state "Stopped"
	I0816 06:00:00.899011    4495 main.go:141] libmachine: (multinode-120000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid...
	I0816 06:00:00.899178    4495 main.go:141] libmachine: (multinode-120000) DBG | Using UUID 3c9151c1-070c-42c3-931c-22df86688b90
	I0816 06:00:01.008020    4495 main.go:141] libmachine: (multinode-120000) DBG | Generated MAC fa:4b:15:6b:d9:84
	I0816 06:00:01.008043    4495 main.go:141] libmachine: (multinode-120000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000
	I0816 06:00:01.008169    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3c9151c1-070c-42c3-931c-22df86688b90", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a68a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0816 06:00:01.008196    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3c9151c1-070c-42c3-931c-22df86688b90", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a68a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0816 06:00:01.008239    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3c9151c1-070c-42c3-931c-22df86688b90", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/multinode-120000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage,/Users/jenkins/minikube-integration/1942
3-1009/.minikube/machines/multinode-120000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"}
	I0816 06:00:01.008264    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3c9151c1-070c-42c3-931c-22df86688b90 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/multinode-120000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"
	I0816 06:00:01.008274    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:00:01.009750    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Pid is 4510
	I0816 06:00:01.010226    4495 main.go:141] libmachine: (multinode-120000) DBG | Attempt 0
	I0816 06:00:01.010247    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:01.010362    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4510
	I0816 06:00:01.012057    4495 main.go:141] libmachine: (multinode-120000) DBG | Searching for fa:4b:15:6b:d9:84 in /var/db/dhcpd_leases ...
	I0816 06:00:01.012159    4495 main.go:141] libmachine: (multinode-120000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0816 06:00:01.012173    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:00:01.012186    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66c09e76}
	I0816 06:00:01.012210    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09e4c}
	I0816 06:00:01.012220    4495 main.go:141] libmachine: (multinode-120000) DBG | Found match: fa:4b:15:6b:d9:84
	I0816 06:00:01.012227    4495 main.go:141] libmachine: (multinode-120000) DBG | IP: 192.169.0.14
	I0816 06:00:01.012281    4495 main.go:141] libmachine: (multinode-120000) Calling .GetConfigRaw
	I0816 06:00:01.012949    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:01.013136    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:01.013567    4495 machine.go:93] provisionDockerMachine start ...
	I0816 06:00:01.013578    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:01.013725    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:01.013820    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:01.013903    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:01.013995    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:01.014112    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:01.014241    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:01.014512    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:01.014525    4495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 06:00:01.017464    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:00:01.068185    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:00:01.068876    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:01.068897    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:01.068911    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:01.068919    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:01.454021    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:00:01.454037    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:00:01.568953    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:01.568980    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:01.568990    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:01.568997    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:01.569812    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:00:01.569825    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:00:07.168469    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 06:00:07.168487    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 06:00:07.168497    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 06:00:07.193237    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 06:00:12.079930    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 06:00:12.079944    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.080122    4495 buildroot.go:166] provisioning hostname "multinode-120000"
	I0816 06:00:12.080134    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.080229    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.080326    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.080409    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.080501    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.080601    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.080741    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.080956    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.080964    4495 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-120000 && echo "multinode-120000" | sudo tee /etc/hostname
	I0816 06:00:12.148519    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-120000
	
	I0816 06:00:12.148538    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.148669    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.148753    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.148849    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.148961    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.149085    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.149226    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.149238    4495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-120000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-120000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-120000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 06:00:12.211112    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:00:12.211132    4495 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 06:00:12.211152    4495 buildroot.go:174] setting up certificates
	I0816 06:00:12.211162    4495 provision.go:84] configureAuth start
	I0816 06:00:12.211169    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.211306    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:12.211422    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.211551    4495 provision.go:143] copyHostCerts
	I0816 06:00:12.211586    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:00:12.211655    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 06:00:12.211663    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:00:12.211800    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 06:00:12.211998    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:00:12.212038    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 06:00:12.212043    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:00:12.212127    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 06:00:12.212273    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:00:12.212325    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 06:00:12.212330    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:00:12.212405    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 06:00:12.212550    4495 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.multinode-120000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-120000]
	I0816 06:00:12.269903    4495 provision.go:177] copyRemoteCerts
	I0816 06:00:12.269960    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 06:00:12.269974    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.270075    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.270176    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.270269    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.270367    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:12.306571    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 06:00:12.306648    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 06:00:12.326772    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 06:00:12.326832    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 06:00:12.347062    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 06:00:12.347120    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 06:00:12.366749    4495 provision.go:87] duration metric: took 155.576654ms to configureAuth
	I0816 06:00:12.366761    4495 buildroot.go:189] setting minikube options for container-runtime
	I0816 06:00:12.366918    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:12.366931    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:12.367058    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.367148    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.367235    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.367325    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.367420    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.367536    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.367659    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.367666    4495 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 06:00:12.425224    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 06:00:12.425241    4495 buildroot.go:70] root file system type: tmpfs
	I0816 06:00:12.425311    4495 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 06:00:12.425325    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.425467    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.425560    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.425665    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.425755    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.425903    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.426047    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.426095    4495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 06:00:12.493904    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 06:00:12.493926    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.494059    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.494132    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.494215    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.494295    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.494395    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.494544    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.494556    4495 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 06:00:14.152169    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 06:00:14.152193    4495 machine.go:96] duration metric: took 13.138876329s to provisionDockerMachine
	I0816 06:00:14.152206    4495 start.go:293] postStartSetup for "multinode-120000" (driver="hyperkit")
	I0816 06:00:14.152214    4495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 06:00:14.152227    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.152413    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 06:00:14.152425    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.152521    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.152608    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.152712    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.152802    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.198445    4495 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 06:00:14.202279    4495 command_runner.go:130] > NAME=Buildroot
	I0816 06:00:14.202290    4495 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 06:00:14.202296    4495 command_runner.go:130] > ID=buildroot
	I0816 06:00:14.202301    4495 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 06:00:14.202308    4495 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 06:00:14.202366    4495 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 06:00:14.202381    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 06:00:14.202494    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 06:00:14.202686    4495 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 06:00:14.202692    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 06:00:14.202900    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 06:00:14.213199    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:00:14.244608    4495 start.go:296] duration metric: took 92.394783ms for postStartSetup
	I0816 06:00:14.244634    4495 fix.go:56] duration metric: took 13.401033545s for fixHost
	I0816 06:00:14.244647    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.244776    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.244886    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.244969    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.245059    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.245183    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:14.245322    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:14.245329    4495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 06:00:14.304249    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813214.442767633
	
	I0816 06:00:14.304262    4495 fix.go:216] guest clock: 1723813214.442767633
	I0816 06:00:14.304268    4495 fix.go:229] Guest: 2024-08-16 06:00:14.442767633 -0700 PDT Remote: 2024-08-16 06:00:14.244637 -0700 PDT m=+13.842835509 (delta=198.130633ms)
	I0816 06:00:14.304285    4495 fix.go:200] guest clock delta is within tolerance: 198.130633ms
	I0816 06:00:14.304290    4495 start.go:83] releasing machines lock for "multinode-120000", held for 13.460725269s
	I0816 06:00:14.304308    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.304440    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:14.304545    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.304904    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.305007    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.305080    4495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 06:00:14.305120    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.305151    4495 ssh_runner.go:195] Run: cat /version.json
	I0816 06:00:14.305162    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.305236    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.305272    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.305347    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.305368    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.305450    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.305467    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.305538    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.305568    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.338164    4495 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0816 06:00:14.338478    4495 ssh_runner.go:195] Run: systemctl --version
	I0816 06:00:14.380328    4495 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 06:00:14.381111    4495 command_runner.go:130] > systemd 252 (252)
	I0816 06:00:14.381147    4495 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 06:00:14.381279    4495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 06:00:14.386544    4495 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 06:00:14.386566    4495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 06:00:14.386605    4495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 06:00:14.399180    4495 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0816 06:00:14.399210    4495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 06:00:14.399216    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:00:14.399316    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:00:14.414316    4495 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0816 06:00:14.414565    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 06:00:14.423600    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 06:00:14.432546    4495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 06:00:14.432587    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 06:00:14.441434    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:00:14.450288    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 06:00:14.458973    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:00:14.467883    4495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 06:00:14.476947    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 06:00:14.485850    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 06:00:14.494622    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 06:00:14.503560    4495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 06:00:14.511413    4495 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 06:00:14.511568    4495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 06:00:14.519551    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:14.617061    4495 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 06:00:14.636046    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:00:14.636127    4495 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 06:00:14.649044    4495 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0816 06:00:14.649494    4495 command_runner.go:130] > [Unit]
	I0816 06:00:14.649504    4495 command_runner.go:130] > Description=Docker Application Container Engine
	I0816 06:00:14.649508    4495 command_runner.go:130] > Documentation=https://docs.docker.com
	I0816 06:00:14.649516    4495 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0816 06:00:14.649523    4495 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0816 06:00:14.649535    4495 command_runner.go:130] > StartLimitBurst=3
	I0816 06:00:14.649542    4495 command_runner.go:130] > StartLimitIntervalSec=60
	I0816 06:00:14.649546    4495 command_runner.go:130] > [Service]
	I0816 06:00:14.649550    4495 command_runner.go:130] > Type=notify
	I0816 06:00:14.649554    4495 command_runner.go:130] > Restart=on-failure
	I0816 06:00:14.649565    4495 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0816 06:00:14.649581    4495 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0816 06:00:14.649588    4495 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0816 06:00:14.649594    4495 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0816 06:00:14.649600    4495 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0816 06:00:14.649606    4495 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0816 06:00:14.649612    4495 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0816 06:00:14.649622    4495 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0816 06:00:14.649628    4495 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0816 06:00:14.649633    4495 command_runner.go:130] > ExecStart=
	I0816 06:00:14.649645    4495 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0816 06:00:14.649650    4495 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0816 06:00:14.649656    4495 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0816 06:00:14.649662    4495 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0816 06:00:14.649666    4495 command_runner.go:130] > LimitNOFILE=infinity
	I0816 06:00:14.649669    4495 command_runner.go:130] > LimitNPROC=infinity
	I0816 06:00:14.649672    4495 command_runner.go:130] > LimitCORE=infinity
	I0816 06:00:14.649695    4495 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0816 06:00:14.649703    4495 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0816 06:00:14.649712    4495 command_runner.go:130] > TasksMax=infinity
	I0816 06:00:14.649716    4495 command_runner.go:130] > TimeoutStartSec=0
	I0816 06:00:14.649727    4495 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0816 06:00:14.649731    4495 command_runner.go:130] > Delegate=yes
	I0816 06:00:14.649754    4495 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0816 06:00:14.649761    4495 command_runner.go:130] > KillMode=process
	I0816 06:00:14.649769    4495 command_runner.go:130] > [Install]
	I0816 06:00:14.649781    4495 command_runner.go:130] > WantedBy=multi-user.target
	I0816 06:00:14.649850    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:00:14.663052    4495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 06:00:14.677177    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:00:14.688371    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:00:14.699288    4495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 06:00:14.723884    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:00:14.735509    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:00:14.750493    4495 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0816 06:00:14.750755    4495 ssh_runner.go:195] Run: which cri-dockerd
	I0816 06:00:14.753530    4495 command_runner.go:130] > /usr/bin/cri-dockerd
	I0816 06:00:14.753632    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 06:00:14.761690    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 06:00:14.774974    4495 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 06:00:14.873309    4495 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 06:00:14.971003    4495 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 06:00:14.971078    4495 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 06:00:14.986615    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:15.084813    4495 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 06:00:17.427732    4495 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.342944904s)
	I0816 06:00:17.427791    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 06:00:17.438062    4495 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 06:00:17.450633    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 06:00:17.461045    4495 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 06:00:17.564144    4495 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 06:00:17.663550    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:17.760162    4495 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 06:00:17.774480    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 06:00:17.785611    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:17.890999    4495 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 06:00:17.946400    4495 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 06:00:17.946480    4495 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 06:00:17.950787    4495 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0816 06:00:17.950802    4495 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 06:00:17.950807    4495 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0816 06:00:17.950823    4495 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0816 06:00:17.950834    4495 command_runner.go:130] > Access: 2024-08-16 13:00:18.043272298 +0000
	I0816 06:00:17.950847    4495 command_runner.go:130] > Modify: 2024-08-16 13:00:18.043272298 +0000
	I0816 06:00:17.950852    4495 command_runner.go:130] > Change: 2024-08-16 13:00:18.044272176 +0000
	I0816 06:00:17.950856    4495 command_runner.go:130] >  Birth: -
	I0816 06:00:17.951036    4495 start.go:563] Will wait 60s for crictl version
	I0816 06:00:17.951085    4495 ssh_runner.go:195] Run: which crictl
	I0816 06:00:17.953851    4495 command_runner.go:130] > /usr/bin/crictl
	I0816 06:00:17.954463    4495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 06:00:17.979279    4495 command_runner.go:130] > Version:  0.1.0
	I0816 06:00:17.979292    4495 command_runner.go:130] > RuntimeName:  docker
	I0816 06:00:17.979296    4495 command_runner.go:130] > RuntimeVersion:  27.1.2
	I0816 06:00:17.979299    4495 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 06:00:17.980280    4495 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 06:00:17.980344    4495 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 06:00:17.995839    4495 command_runner.go:130] > 27.1.2
	I0816 06:00:17.996678    4495 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 06:00:18.012170    4495 command_runner.go:130] > 27.1.2
	I0816 06:00:18.033519    4495 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 06:00:18.033565    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:18.033913    4495 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 06:00:18.038461    4495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 06:00:18.048016    4495 kubeadm.go:883] updating cluster {Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 06:00:18.048115    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:18.048172    4495 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 06:00:18.061303    4495 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0816 06:00:18.061317    4495 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 06:00:18.061322    4495 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0816 06:00:18.061336    4495 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0816 06:00:18.061346    4495 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0816 06:00:18.061349    4495 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0816 06:00:18.061353    4495 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0816 06:00:18.061357    4495 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0816 06:00:18.061366    4495 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 06:00:18.061370    4495 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0816 06:00:18.062136    4495 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 06:00:18.062144    4495 docker.go:615] Images already preloaded, skipping extraction
	I0816 06:00:18.062221    4495 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 06:00:18.074523    4495 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0816 06:00:18.074536    4495 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 06:00:18.074540    4495 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0816 06:00:18.074544    4495 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0816 06:00:18.074548    4495 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0816 06:00:18.074551    4495 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0816 06:00:18.074555    4495 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0816 06:00:18.074559    4495 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0816 06:00:18.074564    4495 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 06:00:18.074568    4495 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0816 06:00:18.075367    4495 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 06:00:18.075385    4495 cache_images.go:84] Images are preloaded, skipping loading
	I0816 06:00:18.075396    4495 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.0 docker true true} ...
	I0816 06:00:18.075472    4495 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-120000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 06:00:18.075537    4495 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 06:00:18.113175    4495 command_runner.go:130] > cgroupfs
	I0816 06:00:18.113266    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:18.113275    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:18.113302    4495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 06:00:18.113319    4495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-120000 NodeName:multinode-120000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 06:00:18.113408    4495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-120000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 06:00:18.113469    4495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 06:00:18.121038    4495 command_runner.go:130] > kubeadm
	I0816 06:00:18.121048    4495 command_runner.go:130] > kubectl
	I0816 06:00:18.121052    4495 command_runner.go:130] > kubelet
	I0816 06:00:18.121093    4495 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 06:00:18.121136    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 06:00:18.128283    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0816 06:00:18.141829    4495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 06:00:18.155424    4495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 06:00:18.168960    4495 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0816 06:00:18.171819    4495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 06:00:18.181142    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:18.274536    4495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 06:00:18.289156    4495 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000 for IP: 192.169.0.14
	I0816 06:00:18.289168    4495 certs.go:194] generating shared ca certs ...
	I0816 06:00:18.289180    4495 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:18.289367    4495 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 06:00:18.289439    4495 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 06:00:18.289449    4495 certs.go:256] generating profile certs ...
	I0816 06:00:18.289566    4495 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.key
	I0816 06:00:18.289649    4495 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key.70a1c6a2
	I0816 06:00:18.289720    4495 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key
	I0816 06:00:18.289727    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 06:00:18.289752    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 06:00:18.289771    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 06:00:18.289796    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 06:00:18.289821    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 06:00:18.289854    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 06:00:18.289882    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 06:00:18.289901    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 06:00:18.290009    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 06:00:18.290056    4495 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 06:00:18.290065    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 06:00:18.290111    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 06:00:18.290167    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 06:00:18.290209    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 06:00:18.290291    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:00:18.290324    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.290344    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.290368    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.290888    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 06:00:18.316173    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 06:00:18.340496    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 06:00:18.367130    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 06:00:18.393037    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 06:00:18.413355    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 06:00:18.433154    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 06:00:18.452867    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 06:00:18.472751    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 06:00:18.492599    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 06:00:18.512435    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 06:00:18.531920    4495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 06:00:18.545336    4495 ssh_runner.go:195] Run: openssl version
	I0816 06:00:18.549529    4495 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 06:00:18.549586    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 06:00:18.557821    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561172    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561269    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561303    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.565440    4495 command_runner.go:130] > 51391683
	I0816 06:00:18.565487    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 06:00:18.573717    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 06:00:18.582023    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585446    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585524    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585558    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.589588    4495 command_runner.go:130] > 3ec20f2e
	I0816 06:00:18.589734    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 06:00:18.597966    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 06:00:18.606288    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609721    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609776    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609813    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.613962    4495 command_runner.go:130] > b5213941
	I0816 06:00:18.614025    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 06:00:18.622223    4495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 06:00:18.625651    4495 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 06:00:18.625661    4495 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 06:00:18.625666    4495 command_runner.go:130] > Device: 253,1	Inode: 5242679     Links: 1
	I0816 06:00:18.625671    4495 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 06:00:18.625677    4495 command_runner.go:130] > Access: 2024-08-16 12:57:57.786133727 +0000
	I0816 06:00:18.625682    4495 command_runner.go:130] > Modify: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625687    4495 command_runner.go:130] > Change: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625691    4495 command_runner.go:130] >  Birth: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625754    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 06:00:18.630112    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.630187    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 06:00:18.634437    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.634494    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 06:00:18.638801    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.638908    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 06:00:18.643107    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.643222    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 06:00:18.647403    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.647443    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 06:00:18.651643    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.651701    4495 kubeadm.go:392] StartCluster: {Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:18.651803    4495 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 06:00:18.664662    4495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 06:00:18.672135    4495 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0816 06:00:18.672152    4495 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0816 06:00:18.672158    4495 command_runner.go:130] > /var/lib/minikube/etcd:
	I0816 06:00:18.672162    4495 command_runner.go:130] > member
	I0816 06:00:18.672188    4495 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 06:00:18.672201    4495 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 06:00:18.672247    4495 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 06:00:18.680637    4495 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 06:00:18.680951    4495 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-120000" does not appear in /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:18.681030    4495 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1009/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-120000" cluster setting kubeconfig missing "multinode-120000" context setting]
	I0816 06:00:18.681214    4495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:18.681863    4495 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:18.682072    4495 kapi.go:59] client config for multinode-120000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc6caf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 06:00:18.682394    4495 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 06:00:18.682572    4495 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 06:00:18.689783    4495 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0816 06:00:18.689803    4495 kubeadm.go:1160] stopping kube-system containers ...
	I0816 06:00:18.689858    4495 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 06:00:18.704984    4495 command_runner.go:130] > f09b2d4d9690
	I0816 06:00:18.705001    4495 command_runner.go:130] > 856dd8770ce9
	I0816 06:00:18.705005    4495 command_runner.go:130] > 24fec6612d93
	I0816 06:00:18.705009    4495 command_runner.go:130] > 5ae7eceff676
	I0816 06:00:18.705013    4495 command_runner.go:130] > 422de4039b19
	I0816 06:00:18.705034    4495 command_runner.go:130] > 701ae173eac2
	I0816 06:00:18.705041    4495 command_runner.go:130] > 7fb2b2ed4016
	I0816 06:00:18.705045    4495 command_runner.go:130] > 796b051433aa
	I0816 06:00:18.705048    4495 command_runner.go:130] > 5901c509532d
	I0816 06:00:18.705053    4495 command_runner.go:130] > 26d48b6ad6fb
	I0816 06:00:18.705057    4495 command_runner.go:130] > 157135701f7d
	I0816 06:00:18.705061    4495 command_runner.go:130] > a5500cc4ab0e
	I0816 06:00:18.705064    4495 command_runner.go:130] > a92131c1b00a
	I0816 06:00:18.705067    4495 command_runner.go:130] > cbed74cdc18e
	I0816 06:00:18.705074    4495 command_runner.go:130] > df82653f7f9d
	I0816 06:00:18.705077    4495 command_runner.go:130] > c6d3cc10ad7c
	I0816 06:00:18.705080    4495 command_runner.go:130] > 01366dfa40b1
	I0816 06:00:18.705084    4495 command_runner.go:130] > 11af48a0790c
	I0816 06:00:18.705087    4495 command_runner.go:130] > 971f82e6187b
	I0816 06:00:18.705092    4495 command_runner.go:130] > cbb55d45a02c
	I0816 06:00:18.705096    4495 command_runner.go:130] > 10f645568130
	I0816 06:00:18.705099    4495 command_runner.go:130] > deee90d52a28
	I0816 06:00:18.705102    4495 command_runner.go:130] > d370d863b181
	I0816 06:00:18.705105    4495 command_runner.go:130] > f15fb0af4dd4
	I0816 06:00:18.705108    4495 command_runner.go:130] > a92ac57224e4
	I0816 06:00:18.705111    4495 command_runner.go:130] > 83daf80db5c2
	I0816 06:00:18.705114    4495 command_runner.go:130] > d6c5415334b1
	I0816 06:00:18.705118    4495 command_runner.go:130] > c7a8ef69797c
	I0816 06:00:18.705121    4495 command_runner.go:130] > fb52eeaf600d
	I0816 06:00:18.705125    4495 command_runner.go:130] > 69c4ad6acb20
	I0816 06:00:18.705128    4495 command_runner.go:130] > 30668a85ecab
	I0816 06:00:18.705756    4495 docker.go:483] Stopping containers: [f09b2d4d9690 856dd8770ce9 24fec6612d93 5ae7eceff676 422de4039b19 701ae173eac2 7fb2b2ed4016 796b051433aa 5901c509532d 26d48b6ad6fb 157135701f7d a5500cc4ab0e a92131c1b00a cbed74cdc18e df82653f7f9d c6d3cc10ad7c 01366dfa40b1 11af48a0790c 971f82e6187b cbb55d45a02c 10f645568130 deee90d52a28 d370d863b181 f15fb0af4dd4 a92ac57224e4 83daf80db5c2 d6c5415334b1 c7a8ef69797c fb52eeaf600d 69c4ad6acb20 30668a85ecab]
	I0816 06:00:18.705835    4495 ssh_runner.go:195] Run: docker stop f09b2d4d9690 856dd8770ce9 24fec6612d93 5ae7eceff676 422de4039b19 701ae173eac2 7fb2b2ed4016 796b051433aa 5901c509532d 26d48b6ad6fb 157135701f7d a5500cc4ab0e a92131c1b00a cbed74cdc18e df82653f7f9d c6d3cc10ad7c 01366dfa40b1 11af48a0790c 971f82e6187b cbb55d45a02c 10f645568130 deee90d52a28 d370d863b181 f15fb0af4dd4 a92ac57224e4 83daf80db5c2 d6c5415334b1 c7a8ef69797c fb52eeaf600d 69c4ad6acb20 30668a85ecab
	I0816 06:00:18.720707    4495 command_runner.go:130] > f09b2d4d9690
	I0816 06:00:18.720720    4495 command_runner.go:130] > 856dd8770ce9
	I0816 06:00:18.720724    4495 command_runner.go:130] > 24fec6612d93
	I0816 06:00:18.721258    4495 command_runner.go:130] > 5ae7eceff676
	I0816 06:00:18.721337    4495 command_runner.go:130] > 422de4039b19
	I0816 06:00:18.722561    4495 command_runner.go:130] > 701ae173eac2
	I0816 06:00:18.722569    4495 command_runner.go:130] > 7fb2b2ed4016
	I0816 06:00:18.722572    4495 command_runner.go:130] > 796b051433aa
	I0816 06:00:18.722576    4495 command_runner.go:130] > 5901c509532d
	I0816 06:00:18.722579    4495 command_runner.go:130] > 26d48b6ad6fb
	I0816 06:00:18.722583    4495 command_runner.go:130] > 157135701f7d
	I0816 06:00:18.722586    4495 command_runner.go:130] > a5500cc4ab0e
	I0816 06:00:18.722590    4495 command_runner.go:130] > a92131c1b00a
	I0816 06:00:18.722593    4495 command_runner.go:130] > cbed74cdc18e
	I0816 06:00:18.722597    4495 command_runner.go:130] > df82653f7f9d
	I0816 06:00:18.722600    4495 command_runner.go:130] > c6d3cc10ad7c
	I0816 06:00:18.722603    4495 command_runner.go:130] > 01366dfa40b1
	I0816 06:00:18.722607    4495 command_runner.go:130] > 11af48a0790c
	I0816 06:00:18.722610    4495 command_runner.go:130] > 971f82e6187b
	I0816 06:00:18.722623    4495 command_runner.go:130] > cbb55d45a02c
	I0816 06:00:18.722627    4495 command_runner.go:130] > 10f645568130
	I0816 06:00:18.722630    4495 command_runner.go:130] > deee90d52a28
	I0816 06:00:18.722633    4495 command_runner.go:130] > d370d863b181
	I0816 06:00:18.722637    4495 command_runner.go:130] > f15fb0af4dd4
	I0816 06:00:18.722652    4495 command_runner.go:130] > a92ac57224e4
	I0816 06:00:18.722659    4495 command_runner.go:130] > 83daf80db5c2
	I0816 06:00:18.722662    4495 command_runner.go:130] > d6c5415334b1
	I0816 06:00:18.722666    4495 command_runner.go:130] > c7a8ef69797c
	I0816 06:00:18.722670    4495 command_runner.go:130] > fb52eeaf600d
	I0816 06:00:18.722677    4495 command_runner.go:130] > 69c4ad6acb20
	I0816 06:00:18.722681    4495 command_runner.go:130] > 30668a85ecab
	I0816 06:00:18.723245    4495 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 06:00:18.735477    4495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 06:00:18.742861    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0816 06:00:18.742883    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0816 06:00:18.742890    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0816 06:00:18.742897    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 06:00:18.742921    4495 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 06:00:18.742926    4495 kubeadm.go:157] found existing configuration files:
	
	I0816 06:00:18.742969    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 06:00:18.749869    4495 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 06:00:18.749886    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 06:00:18.749919    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 06:00:18.757017    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 06:00:18.764154    4495 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 06:00:18.764171    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 06:00:18.764207    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 06:00:18.771390    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 06:00:18.778374    4495 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 06:00:18.778486    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 06:00:18.778526    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 06:00:18.785736    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 06:00:18.792788    4495 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 06:00:18.792810    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 06:00:18.792853    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 06:00:18.800180    4495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 06:00:18.807474    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:18.876944    4495 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 06:00:18.877095    4495 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0816 06:00:18.877286    4495 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0816 06:00:18.877428    4495 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 06:00:18.877660    4495 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0816 06:00:18.877817    4495 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0816 06:00:18.878113    4495 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0816 06:00:18.878292    4495 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0816 06:00:18.878442    4495 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0816 06:00:18.878558    4495 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 06:00:18.878682    4495 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 06:00:18.878862    4495 command_runner.go:130] > [certs] Using the existing "sa" key
	I0816 06:00:18.879716    4495 command_runner.go:130] ! W0816 13:00:19.014566    1405 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:18.879730    4495 command_runner.go:130] ! W0816 13:00:19.015145    1405 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:18.879822    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:18.913639    4495 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 06:00:19.126123    4495 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 06:00:19.196694    4495 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 06:00:19.271744    4495 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 06:00:19.471211    4495 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 06:00:19.536733    4495 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 06:00:19.538626    4495 command_runner.go:130] ! W0816 13:00:19.052252    1410 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.538650    4495 command_runner.go:130] ! W0816 13:00:19.052914    1410 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.538687    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.588124    4495 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 06:00:19.592966    4495 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 06:00:19.592976    4495 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0816 06:00:19.696720    4495 command_runner.go:130] ! W0816 13:00:19.714655    1414 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.696744    4495 command_runner.go:130] ! W0816 13:00:19.715264    1414 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.696756    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.752739    4495 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 06:00:19.752753    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 06:00:19.754873    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 06:00:19.755884    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 06:00:19.758437    4495 command_runner.go:130] ! W0816 13:00:19.893078    1441 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.758458    4495 command_runner.go:130] ! W0816 13:00:19.893553    1441 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.758532    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.871944    4495 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 06:00:19.877346    4495 command_runner.go:130] ! W0816 13:00:20.007621    1449 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.877370    4495 command_runner.go:130] ! W0816 13:00:20.008144    1449 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.877395    4495 api_server.go:52] waiting for apiserver process to appear ...
	I0816 06:00:19.877446    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:20.379632    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:20.878610    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.377548    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.878597    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.890976    4495 command_runner.go:130] > 1735
	I0816 06:00:21.891116    4495 api_server.go:72] duration metric: took 2.013765981s to wait for apiserver process to appear ...
	I0816 06:00:21.891126    4495 api_server.go:88] waiting for apiserver healthz status ...
	I0816 06:00:21.891144    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.629177    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 06:00:23.629204    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 06:00:23.629213    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.650033    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 06:00:23.650048    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 06:00:23.891437    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.903552    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 06:00:23.903567    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 06:00:24.391754    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:24.394976    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 06:00:24.394988    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 06:00:24.891376    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:24.897310    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0816 06:00:24.897388    4495 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0816 06:00:24.897396    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:24.897405    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:24.897408    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:24.902527    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:24.902536    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:24.902541    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:24.902545    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:24.902549    4495 round_trippers.go:580]     Content-Length: 263
	I0816 06:00:24.902552    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:24.902554    4495 round_trippers.go:580]     Audit-Id: b0213136-2235-4ab5-968c-cd4581e041f0
	I0816 06:00:24.902556    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:24.902558    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:24.902579    4495 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0816 06:00:24.902624    4495 api_server.go:141] control plane version: v1.31.0
	I0816 06:00:24.902634    4495 api_server.go:131] duration metric: took 3.011562911s to wait for apiserver health ...
	I0816 06:00:24.902639    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:24.902643    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:24.940103    4495 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 06:00:24.960494    4495 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 06:00:24.965300    4495 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0816 06:00:24.965324    4495 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0816 06:00:24.965331    4495 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0816 06:00:24.965336    4495 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 06:00:24.965340    4495 command_runner.go:130] > Access: 2024-08-16 13:00:11.117693160 +0000
	I0816 06:00:24.965345    4495 command_runner.go:130] > Modify: 2024-08-14 20:00:07.000000000 +0000
	I0816 06:00:24.965350    4495 command_runner.go:130] > Change: 2024-08-16 13:00:08.930127539 +0000
	I0816 06:00:24.965353    4495 command_runner.go:130] >  Birth: -
	I0816 06:00:24.965491    4495 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 06:00:24.965500    4495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 06:00:24.980343    4495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 06:00:25.417057    4495 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0816 06:00:25.417072    4495 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0816 06:00:25.417077    4495 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0816 06:00:25.417081    4495 command_runner.go:130] > daemonset.apps/kindnet configured
	I0816 06:00:25.417181    4495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 06:00:25.417223    4495 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 06:00:25.417233    4495 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 06:00:25.417281    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:25.417286    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.417292    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.417296    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.420085    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.420093    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.420099    4495 round_trippers.go:580]     Audit-Id: 23577c90-0bb1-4f7c-9a81-9fee35307577
	I0816 06:00:25.420102    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.420105    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.420108    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.420110    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.420112    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.421556    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1226"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90806 chars]
	I0816 06:00:25.424642    4495 system_pods.go:59] 12 kube-system pods found
	I0816 06:00:25.424658    4495 system_pods.go:61] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 06:00:25.424664    4495 system_pods.go:61] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 06:00:25.424669    4495 system_pods.go:61] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:25.424673    4495 system_pods.go:61] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:25.424678    4495 system_pods.go:61] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 06:00:25.424685    4495 system_pods.go:61] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 06:00:25.424690    4495 system_pods.go:61] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 06:00:25.424697    4495 system_pods.go:61] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 06:00:25.424701    4495 system_pods.go:61] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:25.424704    4495 system_pods.go:61] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:25.424708    4495 system_pods.go:61] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 06:00:25.424712    4495 system_pods.go:61] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:25.424716    4495 system_pods.go:74] duration metric: took 7.527876ms to wait for pod list to return data ...
	I0816 06:00:25.424721    4495 node_conditions.go:102] verifying NodePressure condition ...
	I0816 06:00:25.424758    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:25.424762    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.424768    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.424770    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.426447    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.426456    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.426461    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.426465    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.426469    4495 round_trippers.go:580]     Audit-Id: f8dd1ea7-5255-4e2a-b32e-e6d9ddd07538
	I0816 06:00:25.426472    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.426475    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.426477    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.426636    4495 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1226"},"items":[{"metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10144 chars]
	I0816 06:00:25.427052    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:25.427064    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:25.427072    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:25.427076    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:25.427080    4495 node_conditions.go:105] duration metric: took 2.355042ms to run NodePressure ...
	I0816 06:00:25.427089    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:25.573807    4495 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0816 06:00:25.723217    4495 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0816 06:00:25.724311    4495 command_runner.go:130] ! W0816 13:00:25.658424    2250 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:25.724332    4495 command_runner.go:130] ! W0816 13:00:25.658861    2250 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:25.724346    4495 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 06:00:25.724405    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0816 06:00:25.724411    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.724416    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.724421    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.726729    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.726739    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.726744    4495 round_trippers.go:580]     Audit-Id: cc78fe2a-eab3-44ca-8902-147009b93ca2
	I0816 06:00:25.726749    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.726752    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.726755    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.726758    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.726760    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.727321    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1232"},"items":[{"metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1162","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 31223 chars]
	I0816 06:00:25.728024    4495 kubeadm.go:739] kubelet initialised
	I0816 06:00:25.728034    4495 kubeadm.go:740] duration metric: took 3.679784ms waiting for restarted kubelet to initialise ...
	I0816 06:00:25.728041    4495 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:25.728071    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:25.728076    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.728081    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.728086    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.729698    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.729705    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.729710    4495 round_trippers.go:580]     Audit-Id: 05ad863b-28d8-480c-9de8-26a2f4258f47
	I0816 06:00:25.729716    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.729720    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.729724    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.729729    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.729733    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.730435    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1232"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89949 chars]
	I0816 06:00:25.732286    4495 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.732322    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:25.732327    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.732333    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.732336    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.733399    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.733408    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.733415    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.733425    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.733430    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.733435    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.733440    4495 round_trippers.go:580]     Audit-Id: 01517cdb-0de0-4cb1-aea3-3efd79fb52fc
	I0816 06:00:25.733442    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.733553    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:25.733792    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.733799    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.733804    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.733808    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.734835    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.734842    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.734848    4495 round_trippers.go:580]     Audit-Id: 9973ba49-60dc-4b43-9ca8-b6c9059303eb
	I0816 06:00:25.734851    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.734855    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.734859    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.734864    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.734869    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.734977    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.735143    4495 pod_ready.go:98] node "multinode-120000" hosting pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.735152    4495 pod_ready.go:82] duration metric: took 2.857102ms for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.735157    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.735163    4495 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.735190    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-120000
	I0816 06:00:25.735195    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.735200    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.735203    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.736148    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.736155    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.736160    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.736165    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.736173    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.736176    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.736179    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.736181    4495 round_trippers.go:580]     Audit-Id: cf8ef4ef-e416-4edc-8b8f-7b1951b093c3
	I0816 06:00:25.736290    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1162","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6888 chars]
	I0816 06:00:25.736506    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.736513    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.736519    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.736522    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.737386    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.737394    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.737399    4495 round_trippers.go:580]     Audit-Id: 97b12066-edce-4362-b4b0-406a5c2db88f
	I0816 06:00:25.737423    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.737427    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.737430    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.737434    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.737437    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.737512    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.737673    4495 pod_ready.go:98] node "multinode-120000" hosting pod "etcd-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.737681    4495 pod_ready.go:82] duration metric: took 2.51426ms for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.737687    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "etcd-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.737697    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.737722    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-120000
	I0816 06:00:25.737727    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.737732    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.737735    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.738701    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.738709    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.738714    4495 round_trippers.go:580]     Audit-Id: b2d355e4-df38-4ab8-9d62-c1a6bb40d6ff
	I0816 06:00:25.738719    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.738723    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.738728    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.738732    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.738737    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.738849    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-120000","namespace":"kube-system","uid":"6811daff-acfb-4752-939b-3d084a8a4c9a","resourceVersion":"1180","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.mirror":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479305Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0816 06:00:25.739067    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.739074    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.739079    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.739083    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.740009    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.740016    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.740021    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.740025    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.740029    4495 round_trippers.go:580]     Audit-Id: 24c29403-1f7b-48ae-93a5-9697e6ec2d8e
	I0816 06:00:25.740033    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.740036    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.740048    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.740148    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.740312    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-apiserver-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.740321    4495 pod_ready.go:82] duration metric: took 2.619144ms for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.740326    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-apiserver-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.740331    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.740358    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-120000
	I0816 06:00:25.740363    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.740368    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.740372    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.741298    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.741305    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.741309    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.741314    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.741318    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.741321    4495 round_trippers.go:580]     Audit-Id: 0f7db6e3-eed2-4c72-a02c-3eeaf9a32775
	I0816 06:00:25.741325    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.741328    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.741443    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-120000","namespace":"kube-system","uid":"67f0047c-62f5-4c90-bee3-40dc18cb33e6","resourceVersion":"1164","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.mirror":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0816 06:00:25.817742    4495 request.go:632] Waited for 76.054567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.817838    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.817851    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.817863    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.817874    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.820036    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.820049    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.820056    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.820061    4495 round_trippers.go:580]     Audit-Id: 53261c96-093b-47b4-8b80-8acc12f82fc6
	I0816 06:00:25.820066    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.820070    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.820075    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.820079    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.820399    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.820671    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-controller-manager-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.820685    4495 pod_ready.go:82] duration metric: took 80.349079ms for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.820693    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-controller-manager-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.820701    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.017795    4495 request.go:632] Waited for 197.051168ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:26.017943    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:26.017961    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.017977    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.017985    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.020436    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.020452    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.020460    4495 round_trippers.go:580]     Audit-Id: e99291a1-6610-4a48-8c1e-071af19761ec
	I0816 06:00:26.020465    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.020469    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.020473    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.020477    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.020481    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.020590    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msbdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dff96db-7737-4e41-a130-a356e3acfd78","resourceVersion":"1229","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0816 06:00:26.218687    4495 request.go:632] Waited for 197.736243ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:26.218777    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:26.218788    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.218800    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.218806    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.221733    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.221746    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.221753    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.221757    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.221798    4495 round_trippers.go:580]     Audit-Id: 948120ca-1b7c-4af3-86bb-5928f644b442
	I0816 06:00:26.221834    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.221841    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.221844    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.222464    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:26.222688    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-proxy-msbdc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:26.222699    4495 pod_ready.go:82] duration metric: took 401.999873ms for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:26.222706    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-proxy-msbdc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:26.222711    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.417715    4495 request.go:632] Waited for 194.965416ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:26.417821    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:26.417833    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.417844    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.417850    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.419933    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.419950    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.419958    4495 round_trippers.go:580]     Audit-Id: a27eaab3-1fba-4261-86c8-4e82bd692723
	I0816 06:00:26.419963    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.419968    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.419973    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.419979    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.419985    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.420386    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vskxm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9b8ca3d-b5bd-4c44-8579-8b31879629ad","resourceVersion":"1104","creationTimestamp":"2024-08-16T12:56:05Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:56:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:26.618403    4495 request.go:632] Waited for 197.652521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:26.618498    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:26.618509    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.618520    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.618529    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.621076    4495 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 06:00:26.621098    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.621108    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.621115    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.621122    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.621131    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.621136    4495 round_trippers.go:580]     Content-Length: 210
	I0816 06:00:26.621144    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.621150    4495 round_trippers.go:580]     Audit-Id: 84189acc-381e-445f-ab2d-83d8d34625a0
	I0816 06:00:26.621167    4495 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-120000-m03\" not found","reason":"NotFound","details":{"name":"multinode-120000-m03","kind":"nodes"},"code":404}
	I0816 06:00:26.621365    4495 pod_ready.go:98] node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:26.621383    4495 pod_ready.go:82] duration metric: took 398.673493ms for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:26.621394    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:26.621403    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.817948    4495 request.go:632] Waited for 196.480552ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:26.818086    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:26.818097    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.818109    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.818118    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.821076    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.821091    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.821099    4495 round_trippers.go:580]     Audit-Id: 04ca7c33-61a5-4c68-8785-276e1a624d18
	I0816 06:00:26.821103    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.821108    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.821113    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.821116    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.821119    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.821219    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x88cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"21efba47-35db-47ba-ace5-119b04bf7355","resourceVersion":"1001","creationTimestamp":"2024-08-16T12:55:15Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:55:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:27.017388    4495 request.go:632] Waited for 195.844198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:27.017448    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:27.017453    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.017460    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.017463    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.023306    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:27.023320    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.023326    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.023329    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.023331    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.023334    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.023337    4495 round_trippers.go:580]     Audit-Id: d3066e80-b4e4-42de-b6dd-67c5c1ca1bb5
	I0816 06:00:27.023340    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.023391    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000-m02","uid":"57b3de1e-d3de-4534-9ecc-a0706c682584","resourceVersion":"1019","creationTimestamp":"2024-08-16T12:58:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_16T05_58_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:58:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3805 chars]
	I0816 06:00:27.023559    4495 pod_ready.go:93] pod "kube-proxy-x88cp" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:27.023568    4495 pod_ready.go:82] duration metric: took 402.164034ms for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:27.023575    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:27.217753    4495 request.go:632] Waited for 194.138319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:27.217911    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:27.217921    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.217932    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.217939    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.220647    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:27.220661    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.220668    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.220674    4495 round_trippers.go:580]     Audit-Id: 91c9d444-e56e-411b-b37d-b5ea87e6b50a
	I0816 06:00:27.220678    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.220682    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.220686    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.220690    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.221047    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-120000","namespace":"kube-system","uid":"b8188bb8-5278-422d-86a5-19d70c796638","resourceVersion":"1182","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.mirror":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.seen":"2024-08-16T12:54:27.908480653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0816 06:00:27.418903    4495 request.go:632] Waited for 197.499014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.418992    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.419003    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.419015    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.419025    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.422042    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:27.422058    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.422065    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.422070    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.422073    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.422078    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.422082    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.422087    4495 round_trippers.go:580]     Audit-Id: 32177227-9b65-4b4b-a32b-46d9331444aa
	I0816 06:00:27.422247    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:27.422512    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-scheduler-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:27.422526    4495 pod_ready.go:82] duration metric: took 398.952661ms for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:27.422535    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-scheduler-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:27.422541    4495 pod_ready.go:39] duration metric: took 1.694527427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:27.422554    4495 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 06:00:27.433482    4495 command_runner.go:130] > -16
	I0816 06:00:27.433765    4495 ops.go:34] apiserver oom_adj: -16
	I0816 06:00:27.433777    4495 kubeadm.go:597] duration metric: took 8.761739587s to restartPrimaryControlPlane
	I0816 06:00:27.433782    4495 kubeadm.go:394] duration metric: took 8.782260965s to StartCluster
	I0816 06:00:27.433791    4495 settings.go:142] acquiring lock: {Name:mkb3c8aac25c21025142737c3a236d96f65e9fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:27.433883    4495 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:27.434235    4495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:27.434492    4495 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:00:27.434547    4495 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 06:00:27.434663    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:27.457036    4495 out.go:177] * Verifying Kubernetes components...
	I0816 06:00:27.497964    4495 out.go:177] * Enabled addons: 
	I0816 06:00:27.519104    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:27.555967    4495 addons.go:510] duration metric: took 121.425372ms for enable addons: enabled=[]
	I0816 06:00:27.670896    4495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 06:00:27.683537    4495 node_ready.go:35] waiting up to 6m0s for node "multinode-120000" to be "Ready" ...
	I0816 06:00:27.683593    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.683599    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.683605    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.683610    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.685158    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:27.685166    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.685171    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.685174    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.685178    4495 round_trippers.go:580]     Audit-Id: f0c9398f-4e83-4228-96b6-078b55620838
	I0816 06:00:27.685184    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.685188    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.685203    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.685587    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:28.183809    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:28.183891    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:28.183907    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:28.183914    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:28.186432    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:28.186448    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:28.186454    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:28.186459    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:28.186463    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:28.186467    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:28 GMT
	I0816 06:00:28.186470    4495 round_trippers.go:580]     Audit-Id: 6df01f5a-6454-4835-8410-d6deec65b5ee
	I0816 06:00:28.186473    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:28.186592    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:28.684413    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:28.684439    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:28.684450    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:28.684456    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:28.687367    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:28.687382    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:28.687388    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:28.687392    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:28.687395    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:28.687411    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:28.687418    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:28 GMT
	I0816 06:00:28.687421    4495 round_trippers.go:580]     Audit-Id: f5fbda18-63c7-4d36-a04a-469856ac6643
	I0816 06:00:28.687486    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.184833    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:29.184857    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:29.184868    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:29.184876    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:29.187720    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:29.187736    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:29.187743    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:29.187748    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:29 GMT
	I0816 06:00:29.187752    4495 round_trippers.go:580]     Audit-Id: 081e0a8c-5076-4160-9bf5-e86ebbed3097
	I0816 06:00:29.187758    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:29.187764    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:29.187770    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:29.188009    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.685089    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:29.685112    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:29.685124    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:29.685130    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:29.687629    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:29.687644    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:29.687654    4495 round_trippers.go:580]     Audit-Id: 234ea32c-891a-4ac1-a955-830b5b99ce3a
	I0816 06:00:29.687663    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:29.687668    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:29.687672    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:29.687676    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:29.687682    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:29 GMT
	I0816 06:00:29.687816    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.688079    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:30.184844    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:30.184865    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:30.184877    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:30.184884    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:30.187641    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:30.187660    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:30.187670    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:30.187678    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:30.187689    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:30 GMT
	I0816 06:00:30.187693    4495 round_trippers.go:580]     Audit-Id: c31de6db-811b-46b8-9aa4-623b561b164a
	I0816 06:00:30.187698    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:30.187703    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:30.188119    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:30.685502    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:30.685524    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:30.685536    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:30.685541    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:30.688085    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:30.688104    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:30.688112    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:30.688117    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:30 GMT
	I0816 06:00:30.688121    4495 round_trippers.go:580]     Audit-Id: 68e46f04-879a-40b2-8856-7face0d9c06e
	I0816 06:00:30.688126    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:30.688129    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:30.688133    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:30.688252    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:31.184738    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:31.184763    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:31.184774    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:31.184782    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:31.187254    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:31.187274    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:31.187284    4495 round_trippers.go:580]     Audit-Id: 57b4bd15-0c5c-4895-b2ea-51ec55b072eb
	I0816 06:00:31.187290    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:31.187295    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:31.187298    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:31.187302    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:31.187305    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:31 GMT
	I0816 06:00:31.187490    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:31.683735    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:31.683758    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:31.683770    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:31.683778    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:31.686293    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:31.686333    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:31.686346    4495 round_trippers.go:580]     Audit-Id: 9801362e-88df-4675-a2c8-0e91d5be5614
	I0816 06:00:31.686351    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:31.686354    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:31.686357    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:31.686362    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:31.686367    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:31 GMT
	I0816 06:00:31.686420    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:32.184604    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:32.184632    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:32.184644    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:32.184651    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:32.187173    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:32.187193    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:32.187201    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:32.187205    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:32.187210    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:32.187215    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:32.187220    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:32 GMT
	I0816 06:00:32.187224    4495 round_trippers.go:580]     Audit-Id: 1dbf9d45-c794-4697-a381-7fb61cb7609f
	I0816 06:00:32.187552    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:32.187830    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:32.684978    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:32.685004    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:32.685018    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:32.685026    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:32.687730    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:32.687753    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:32.687760    4495 round_trippers.go:580]     Audit-Id: c40c4e2b-e3de-4b0e-81b7-3a40fe487bfc
	I0816 06:00:32.687766    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:32.687771    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:32.687778    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:32.687781    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:32.687784    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:32 GMT
	I0816 06:00:32.688043    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:33.185794    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:33.185819    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:33.185831    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:33.185839    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:33.188963    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:33.188979    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:33.188986    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:33.188991    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:33.188995    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:33.189000    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:33 GMT
	I0816 06:00:33.189003    4495 round_trippers.go:580]     Audit-Id: 42192a3c-1770-4ab8-bdac-a9129416eb0a
	I0816 06:00:33.189007    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:33.189171    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:33.685058    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:33.685086    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:33.685098    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:33.685103    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:33.687708    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:33.687728    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:33.687736    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:33.687741    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:33 GMT
	I0816 06:00:33.687745    4495 round_trippers.go:580]     Audit-Id: 465b534b-271d-4b20-a0a5-d90509c1e5e7
	I0816 06:00:33.687748    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:33.687760    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:33.687764    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:33.687869    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.183737    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:34.183762    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:34.183774    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:34.183780    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:34.186354    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:34.186372    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:34.186379    4495 round_trippers.go:580]     Audit-Id: 40ff16d2-5ea9-46e9-b903-a0d03c7e14e0
	I0816 06:00:34.186383    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:34.186388    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:34.186393    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:34.186398    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:34.186402    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:34 GMT
	I0816 06:00:34.186756    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.684390    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:34.684414    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:34.684425    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:34.684433    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:34.686894    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:34.686911    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:34.686919    4495 round_trippers.go:580]     Audit-Id: ed86880d-2a2e-4592-abc2-f4e2c9222b99
	I0816 06:00:34.686925    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:34.686932    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:34.686937    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:34.686942    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:34.686947    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:34 GMT
	I0816 06:00:34.687229    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.687492    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:35.183952    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:35.183982    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:35.183994    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:35.184001    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:35.186228    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:35.186243    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:35.186251    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:35.186257    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:35 GMT
	I0816 06:00:35.186263    4495 round_trippers.go:580]     Audit-Id: bd5c6e0f-5321-4600-8834-753aad42096d
	I0816 06:00:35.186270    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:35.186276    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:35.186280    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:35.186477    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:35.683869    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:35.683896    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:35.683908    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:35.683917    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:35.686865    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:35.686907    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:35.686919    4495 round_trippers.go:580]     Audit-Id: c0a6029a-329b-4bd4-b351-0e3b6945e48f
	I0816 06:00:35.686925    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:35.686931    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:35.686934    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:35.686938    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:35.686942    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:35 GMT
	I0816 06:00:35.687074    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.185651    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:36.185673    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:36.185684    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:36.185690    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:36.188463    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:36.188478    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:36.188485    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:36 GMT
	I0816 06:00:36.188489    4495 round_trippers.go:580]     Audit-Id: 35a69241-cd12-49d9-bfe6-edf0ffb501d2
	I0816 06:00:36.188494    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:36.188498    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:36.188503    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:36.188506    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:36.188885    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.685754    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:36.685792    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:36.685804    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:36.685814    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:36.688468    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:36.688484    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:36.688492    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:36 GMT
	I0816 06:00:36.688496    4495 round_trippers.go:580]     Audit-Id: a8a2027a-ff3b-4dc3-8bfb-30408f06db19
	I0816 06:00:36.688499    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:36.688503    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:36.688508    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:36.688514    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:36.688657    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.688938    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:37.183701    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:37.183727    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:37.183760    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:37.183774    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:37.186458    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:37.186474    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:37.186481    4495 round_trippers.go:580]     Audit-Id: 2478ee78-4c5b-4854-b2d7-40ed83bfe8ff
	I0816 06:00:37.186486    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:37.186490    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:37.186494    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:37.186497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:37.186518    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:37 GMT
	I0816 06:00:37.186636    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:37.684137    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:37.684165    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:37.684177    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:37.684183    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:37.686997    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:37.687018    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:37.687026    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:37.687030    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:37.687034    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:37.687037    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:37.687041    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:37 GMT
	I0816 06:00:37.687045    4495 round_trippers.go:580]     Audit-Id: 127265de-bf18-4af5-8df5-7bec55284a59
	I0816 06:00:37.687121    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:38.184225    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:38.184253    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:38.184265    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:38.184273    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:38.187123    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:38.187143    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:38.187151    4495 round_trippers.go:580]     Audit-Id: 6a8f5a9d-d3ea-4bb8-a712-7ea75e216b5e
	I0816 06:00:38.187156    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:38.187163    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:38.187170    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:38.187174    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:38.187177    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:38 GMT
	I0816 06:00:38.187381    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:38.683932    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:38.683953    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:38.683963    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:38.683969    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:38.686689    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:38.686709    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:38.686717    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:38.686724    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:38.686730    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:38 GMT
	I0816 06:00:38.686735    4495 round_trippers.go:580]     Audit-Id: 13107fe4-d471-4b9f-a2c0-ca34ceaf8ab9
	I0816 06:00:38.686742    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:38.686755    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:38.686834    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:39.185629    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:39.185654    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:39.185666    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:39.185674    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:39.188313    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:39.188330    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:39.188337    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:39.188341    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:39 GMT
	I0816 06:00:39.188345    4495 round_trippers.go:580]     Audit-Id: eb10f5a2-8dac-4080-825b-536b57d39a01
	I0816 06:00:39.188348    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:39.188351    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:39.188354    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:39.188464    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:39.188717    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:39.683570    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:39.683592    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:39.683601    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:39.683608    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:39.686404    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:39.686424    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:39.686435    4495 round_trippers.go:580]     Audit-Id: aff6da3d-a5e1-488d-b69f-21516c86cf33
	I0816 06:00:39.686442    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:39.686448    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:39.686454    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:39.686458    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:39.686462    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:39 GMT
	I0816 06:00:39.686608    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:40.184776    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:40.184800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:40.184810    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:40.184815    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:40.187560    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:40.187579    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:40.187586    4495 round_trippers.go:580]     Audit-Id: e429582a-c575-4dc0-895b-02821e0827a1
	I0816 06:00:40.187590    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:40.187593    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:40.187598    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:40.187601    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:40.187605    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:40 GMT
	I0816 06:00:40.187691    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:40.684417    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:40.684440    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:40.684453    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:40.684459    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:40.686945    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:40.686960    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:40.686967    4495 round_trippers.go:580]     Audit-Id: 81c9f382-cd6f-4411-81ab-38740da9540e
	I0816 06:00:40.686971    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:40.686975    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:40.686980    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:40.686983    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:40.686987    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:40 GMT
	I0816 06:00:40.687114    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.184726    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:41.184753    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:41.184806    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:41.184819    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:41.187612    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:41.187630    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:41.187637    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:41.187644    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:41 GMT
	I0816 06:00:41.187649    4495 round_trippers.go:580]     Audit-Id: f230e89f-5e88-4a7a-9ffc-76611236aaa1
	I0816 06:00:41.187657    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:41.187663    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:41.187670    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:41.187804    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.683710    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:41.683734    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:41.683746    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:41.683753    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:41.686547    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:41.686566    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:41.686574    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:41.686579    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:41.686583    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:41.686587    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:41.686591    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:41 GMT
	I0816 06:00:41.686594    4495 round_trippers.go:580]     Audit-Id: acf1f7ce-0a7a-4579-a4f5-060d34eacfeb
	I0816 06:00:41.686670    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.686926    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:42.184911    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:42.184938    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:42.184949    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:42.184956    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:42.187592    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:42.187610    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:42.187618    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:42.187623    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:42.187631    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:42.187636    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:42 GMT
	I0816 06:00:42.187646    4495 round_trippers.go:580]     Audit-Id: b7bf8e18-547a-45a8-92e5-25d0ca741602
	I0816 06:00:42.187655    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:42.188153    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:42.683685    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:42.683708    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:42.683720    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:42.683727    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:42.686857    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:42.686871    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:42.686878    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:42.686883    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:42 GMT
	I0816 06:00:42.686888    4495 round_trippers.go:580]     Audit-Id: d0f137dc-e037-45ea-9113-8fb55c2cd3c5
	I0816 06:00:42.686892    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:42.686896    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:42.686901    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:42.687035    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.185152    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:43.185182    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:43.185194    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:43.185199    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:43.187891    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:43.187907    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:43.187927    4495 round_trippers.go:580]     Audit-Id: 055a2ac7-7e92-479f-9d5b-07e6f6b9eced
	I0816 06:00:43.187933    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:43.187936    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:43.187939    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:43.187966    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:43.187976    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:43 GMT
	I0816 06:00:43.188092    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.684397    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:43.684421    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:43.684431    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:43.684438    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:43.686863    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:43.686877    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:43.686884    4495 round_trippers.go:580]     Audit-Id: ca3700c8-a7c6-42a0-aacd-58a30b3608e4
	I0816 06:00:43.686889    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:43.686893    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:43.686897    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:43.686901    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:43.686904    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:43 GMT
	I0816 06:00:43.687211    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.687468    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:44.184703    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.184725    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.184737    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.184743    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.187423    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:44.187436    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.187444    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.187448    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.187452    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.187457    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.187462    4495 round_trippers.go:580]     Audit-Id: 784dc758-68eb-43c3-84fa-47675351fb5a
	I0816 06:00:44.187465    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.187618    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:44.187882    4495 node_ready.go:49] node "multinode-120000" has status "Ready":"True"
	I0816 06:00:44.187897    4495 node_ready.go:38] duration metric: took 16.504665202s for node "multinode-120000" to be "Ready" ...
	I0816 06:00:44.187904    4495 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:44.187937    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:44.187942    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.187948    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.187951    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.189781    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.189790    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.189799    4495 round_trippers.go:580]     Audit-Id: 1a053412-eda8-4ab9-8655-bacda9460671
	I0816 06:00:44.189809    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.189813    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.189817    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.189821    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.189824    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.190831    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1295"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88975 chars]
	I0816 06:00:44.192799    4495 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:44.192835    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:44.192840    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.192846    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.192851    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.194016    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.194023    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.194028    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.194032    4495 round_trippers.go:580]     Audit-Id: 3161d64a-c1e8-4bcf-8d1b-51dd4d9571b6
	I0816 06:00:44.194036    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.194041    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.194048    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.194054    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.194192    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:44.194440    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.194447    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.194453    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.194458    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.195443    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:44.195452    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.195457    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.195460    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.195463    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.195471    4495 round_trippers.go:580]     Audit-Id: 5d3e841f-7e1d-475a-a0d9-e6cfb1ae83a3
	I0816 06:00:44.195475    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.195477    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.195624    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:44.694002    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:44.694071    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.694084    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.694091    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.696555    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:44.696573    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.696585    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.696593    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.696599    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.696607    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.696613    4495 round_trippers.go:580]     Audit-Id: e35f4581-6dd6-4921-b6c8-a03fc475e76f
	I0816 06:00:44.696619    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.696931    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:44.697320    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.697330    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.697338    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.697344    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.698887    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.698895    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.698900    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.698904    4495 round_trippers.go:580]     Audit-Id: eba3e30d-45c9-4664-abb2-01ab58337ffb
	I0816 06:00:44.698908    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.698913    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.698919    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.698923    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.699083    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:45.193590    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:45.193613    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.193624    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.193630    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.196063    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:45.196079    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.196086    4495 round_trippers.go:580]     Audit-Id: d94cc2d6-a5e0-46d1-846b-4e2ffaad5215
	I0816 06:00:45.196090    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.196094    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.196097    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.196100    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.196105    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.196401    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:45.196791    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:45.196800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.196808    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.196813    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.198243    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:45.198250    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.198256    4495 round_trippers.go:580]     Audit-Id: 436a3839-50fc-41e0-aefb-994dfc82586f
	I0816 06:00:45.198259    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.198268    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.198273    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.198276    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.198278    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.198428    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:45.693974    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:45.693997    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.694010    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.694019    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.696758    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:45.696772    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.696779    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.696792    4495 round_trippers.go:580]     Audit-Id: 8415e6f8-e946-41af-a80c-15c5751d219f
	I0816 06:00:45.696797    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.696801    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.696805    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.696809    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.696954    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:45.697357    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:45.697366    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.697374    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.697377    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.698909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:45.698916    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.698920    4495 round_trippers.go:580]     Audit-Id: 134cd26c-86d3-4b3a-af71-84933a0d88ae
	I0816 06:00:45.698924    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.698927    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.698929    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.698932    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.698934    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.699056    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:46.193630    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:46.193654    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.193666    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.193674    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.196336    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:46.196353    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.196360    4495 round_trippers.go:580]     Audit-Id: 33ec9444-68be-434e-8dc0-3ef86e499bbf
	I0816 06:00:46.196363    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.196380    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.196387    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.196397    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.196402    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.196875    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:46.197269    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:46.197279    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.197287    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.197291    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.198719    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:46.198729    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.198734    4495 round_trippers.go:580]     Audit-Id: 9f2089a4-8222-4cd0-ba1c-08e5433423c1
	I0816 06:00:46.198737    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.198739    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.198742    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.198745    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.198747    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.198920    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:46.199101    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:46.695010    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:46.695033    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.695045    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.695052    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.697715    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:46.697730    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.697737    4495 round_trippers.go:580]     Audit-Id: a941da9d-2868-4a08-a222-7c84c0646131
	I0816 06:00:46.697743    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.697747    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.697751    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.697755    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.697759    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.698098    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:46.698495    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:46.698504    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.698512    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.698517    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.699909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:46.699919    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.699927    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.699932    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.699936    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.699938    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.699950    4495 round_trippers.go:580]     Audit-Id: 2e04553b-144d-4c8f-b9f7-6c15ccc65554
	I0816 06:00:46.699954    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.700205    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:47.194074    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:47.194102    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.194114    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.194119    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.197460    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:47.197476    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.197483    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.197488    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.197493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.197497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.197500    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.197513    4495 round_trippers.go:580]     Audit-Id: a7c47070-3a2a-4b06-871c-46f540812c9a
	I0816 06:00:47.197792    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:47.198182    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:47.198192    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.198200    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.198203    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.200341    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:47.200353    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.200362    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.200367    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.200371    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.200375    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.200377    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.200379    4495 round_trippers.go:580]     Audit-Id: 0f8a5e3c-b81c-4d45-b94f-8965cb70e2f0
	I0816 06:00:47.200635    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:47.693737    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:47.693754    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.693762    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.693766    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.695837    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:47.695850    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.695857    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.695863    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.695869    4495 round_trippers.go:580]     Audit-Id: efbcbcb7-7e24-4177-bac6-452bf2000f90
	I0816 06:00:47.695872    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.695878    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.695882    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.695994    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:47.696296    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:47.696303    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.696309    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.696331    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.697653    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:47.697659    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.697664    4495 round_trippers.go:580]     Audit-Id: f3478541-8b91-4cba-b592-4cc42c120e6c
	I0816 06:00:47.697667    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.697670    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.697674    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.697678    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.697682    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.697844    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:48.193093    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:48.193121    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.193132    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.193136    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.195878    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:48.195893    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.195900    4495 round_trippers.go:580]     Audit-Id: 5b2473fa-d9f6-4df6-86a7-ee5024917520
	I0816 06:00:48.195904    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.195909    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.195912    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.195917    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.195921    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.196812    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:48.197831    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:48.197840    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.197847    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.197851    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.199214    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:48.199225    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.199230    4495 round_trippers.go:580]     Audit-Id: a0088d80-1060-4de4-8323-23148ce54ae8
	I0816 06:00:48.199233    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.199235    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.199237    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.199240    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.199242    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.199308    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:48.199512    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:48.694951    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:48.694979    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.694998    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.695036    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.697668    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:48.697681    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.697688    4495 round_trippers.go:580]     Audit-Id: dcfb047e-65f4-4a55-a55c-8cf446c43fd1
	I0816 06:00:48.697693    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.697700    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.697705    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.697710    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.697714    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.697825    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:48.698194    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:48.698203    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.698211    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.698218    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.699789    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:48.699818    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.699828    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.699843    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.699850    4495 round_trippers.go:580]     Audit-Id: 3f5c629b-235c-489e-8403-ecf62ef91337
	I0816 06:00:48.699853    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.699855    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.699858    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.699916    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:49.193572    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:49.193600    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.193611    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.193618    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.196346    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:49.196379    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.196399    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.196419    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.196444    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.196453    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.196457    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.196460    4495 round_trippers.go:580]     Audit-Id: d89ff074-c96d-4832-986d-80d8732b9f71
	I0816 06:00:49.196623    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:49.196993    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:49.197002    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.197010    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.197014    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.198352    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:49.198361    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.198368    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.198373    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.198378    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.198382    4495 round_trippers.go:580]     Audit-Id: 76d08124-64a9-4ca8-8d3b-8d82f00657af
	I0816 06:00:49.198387    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.198392    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.198556    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:49.693984    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:49.694009    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.694018    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.694025    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.696844    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:49.696860    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.696868    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.696873    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.696877    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.696881    4495 round_trippers.go:580]     Audit-Id: db944d8d-52d6-4dbf-b101-043bff7ea90c
	I0816 06:00:49.696886    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.696890    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.697015    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:49.697386    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:49.697395    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.697403    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.697429    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.698921    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:49.698929    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.698934    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.698937    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.698940    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.698942    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.698945    4495 round_trippers.go:580]     Audit-Id: eb99aa58-e1f9-4954-bd62-6c0c0bc2dbaa
	I0816 06:00:49.698948    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.699094    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.193122    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:50.193150    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.193161    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.193167    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.195943    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:50.195961    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.195969    4495 round_trippers.go:580]     Audit-Id: dede90ac-f9e8-4e5d-a71c-343bccbefb69
	I0816 06:00:50.195974    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.195977    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.196004    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.196012    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.196018    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.196121    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:50.196494    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:50.196504    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.196512    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.196526    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.197909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:50.197918    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.197923    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.197925    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.197927    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.197930    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.197933    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.197936    4495 round_trippers.go:580]     Audit-Id: e3821546-56c4-429d-a7ca-409ed0b5c376
	I0816 06:00:50.198094    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.694562    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:50.694590    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.694602    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.694608    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.697242    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:50.697257    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.697265    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.697270    4495 round_trippers.go:580]     Audit-Id: e2e06ee0-c869-4597-bf8b-da844e8a006b
	I0816 06:00:50.697274    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.697278    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.697282    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.697285    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.697524    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:50.697896    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:50.697906    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.697914    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.697919    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.699459    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:50.699468    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.699476    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.699495    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.699504    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.699521    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.699527    4495 round_trippers.go:580]     Audit-Id: ea89083e-33ba-4f08-a0f3-e5d1464b020e
	I0816 06:00:50.699529    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.699849    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.700021    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:51.193059    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:51.193081    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.193092    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.193098    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.195867    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:51.195880    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.195888    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.195893    4495 round_trippers.go:580]     Audit-Id: b4125754-b93d-411d-9ced-26b31f4bd1d4
	I0816 06:00:51.195897    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.195901    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.195904    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.195908    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.196060    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:51.196429    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:51.196438    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.196446    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.196451    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.197916    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:51.197927    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.197934    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.197940    4495 round_trippers.go:580]     Audit-Id: 26bdde0b-f5e4-4d5e-a93b-8cb8802fe6f6
	I0816 06:00:51.197944    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.197948    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.197951    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.197955    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.198186    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:51.693604    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:51.693626    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.693638    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.693647    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.696401    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:51.696419    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.696430    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.696438    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.696443    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.696465    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.696474    4495 round_trippers.go:580]     Audit-Id: 6089f18d-0734-4927-9662-70ea2b4f65c0
	I0816 06:00:51.696477    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.696769    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:51.697136    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:51.697146    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.697154    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.697160    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.698713    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:51.698721    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.698727    4495 round_trippers.go:580]     Audit-Id: 2be2a0c6-6e5d-435e-98ca-3cd9328ce88e
	I0816 06:00:51.698732    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.698749    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.698753    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.698756    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.698758    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.698866    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:52.193095    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:52.193123    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.193135    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.193144    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.196556    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:52.196573    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.196580    4495 round_trippers.go:580]     Audit-Id: 5c23a1f6-b54d-43e5-b2a5-4f0333cc8019
	I0816 06:00:52.196585    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.196613    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.196622    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.196626    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.196629    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.196778    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:52.197152    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:52.197162    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.197171    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.197186    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.198650    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:52.198659    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.198663    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.198668    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.198673    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.198678    4495 round_trippers.go:580]     Audit-Id: daf0ad65-6369-4382-9cd9-7d81c8a90d71
	I0816 06:00:52.198681    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.198684    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.198798    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:52.694180    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:52.694202    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.694216    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.694221    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.696755    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:52.696769    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.696776    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.696794    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.696798    4495 round_trippers.go:580]     Audit-Id: 91a685d0-8bda-4d82-b91a-1f01d8f9479f
	I0816 06:00:52.696802    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.696805    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.696809    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.696916    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:52.697297    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:52.697312    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.697321    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.697326    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.699106    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:52.699121    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.699128    4495 round_trippers.go:580]     Audit-Id: a0dfb14e-e815-43fb-830c-1d068661317a
	I0816 06:00:52.699135    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.699140    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.699143    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.699146    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.699149    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.699221    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:53.194387    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:53.194413    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.194425    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.194431    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.197272    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:53.197291    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.197299    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.197306    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.197313    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.197318    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.197323    4495 round_trippers.go:580]     Audit-Id: 61747d54-260b-44e3-a41a-4a5d32cdb57d
	I0816 06:00:53.197328    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.197516    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:53.197897    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:53.197907    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.197915    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.197920    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.199434    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:53.199443    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.199447    4495 round_trippers.go:580]     Audit-Id: 49a5e8ba-a111-4faf-87fd-2774dd7befa8
	I0816 06:00:53.199449    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.199452    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.199455    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.199457    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.199459    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.199524    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:53.199691    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:53.693934    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:53.693955    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.693966    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.693974    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.696451    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:53.696465    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.696475    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.696480    4495 round_trippers.go:580]     Audit-Id: 97031ce4-149d-4698-8484-fbd4e6766633
	I0816 06:00:53.696486    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.696489    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.696493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.696497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.696797    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:53.697162    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:53.697170    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.697175    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.697180    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.698224    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:53.698233    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.698237    4495 round_trippers.go:580]     Audit-Id: f14f9405-50a7-4668-8bc8-23ee75a08697
	I0816 06:00:53.698240    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.698244    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.698249    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.698252    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.698255    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.698406    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:54.192926    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:54.192950    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.192962    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.192968    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.195264    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:54.195276    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.195283    4495 round_trippers.go:580]     Audit-Id: 35341799-17d6-4c03-a451-30b6b32c071d
	I0816 06:00:54.195289    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.195294    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.195300    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.195304    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.195308    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.195400    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:54.195765    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:54.195774    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.195782    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.195786    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.197260    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.197269    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.197274    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.197277    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.197280    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.197284    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.197287    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.197289    4495 round_trippers.go:580]     Audit-Id: aa30861c-6892-49f9-ba64-5ee6d5cb60c4
	I0816 06:00:54.197343    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:54.692787    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:54.692800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.692807    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.692810    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.694497    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.694507    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.694512    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.694515    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.694518    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.694520    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.694523    4495 round_trippers.go:580]     Audit-Id: 9bbfb7f9-5e9f-44ba-bbda-b0e0a42df8a5
	I0816 06:00:54.694526    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.694614    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:54.694910    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:54.694917    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.694923    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.694926    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.696051    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.696061    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.696067    4495 round_trippers.go:580]     Audit-Id: 1920a61c-7534-468f-8dc2-a357ba5ebf92
	I0816 06:00:54.696072    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.696075    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.696080    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.696085    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.696089    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.696337    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.192902    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:55.192925    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.192937    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.192943    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.195900    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:55.195919    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.195926    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.195931    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.195934    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.195939    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.195964    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.195972    4495 round_trippers.go:580]     Audit-Id: cf3f6819-6604-4fa8-985b-301e5c71d88d
	I0816 06:00:55.196184    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:55.196552    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:55.196562    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.196570    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.196575    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.197904    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.197912    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.197917    4495 round_trippers.go:580]     Audit-Id: 04a0a47c-a7ee-4a21-a6ad-e505f4196893
	I0816 06:00:55.197920    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.197938    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.197945    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.197950    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.197955    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.198184    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.692829    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:55.692844    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.692850    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.692855    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.694681    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.694689    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.694694    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.694697    4495 round_trippers.go:580]     Audit-Id: 5f4f3be9-7ccc-4d14-9fc6-7cb349bfb7b6
	I0816 06:00:55.694699    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.694704    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.694707    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.694710    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.694825    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:55.695111    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:55.695118    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.695124    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.695129    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.696315    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.696322    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.696328    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.696332    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.696338    4495 round_trippers.go:580]     Audit-Id: 1c2c4641-b582-4b29-ab5d-f92f58f250c4
	I0816 06:00:55.696347    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.696354    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.696364    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.696640    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.696816    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:56.194306    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:56.194326    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.194338    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.194344    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.197337    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:56.197352    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.197362    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.197371    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.197378    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.197383    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.197388    4495 round_trippers.go:580]     Audit-Id: cecd2419-a35c-4a96-bbef-b6262ad05886
	I0816 06:00:56.197394    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.197782    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:56.198082    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:56.198090    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.198096    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.198099    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.199288    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:56.199296    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.199300    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.199318    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.199323    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.199326    4495 round_trippers.go:580]     Audit-Id: 341c369b-d4ef-4095-87e9-9181a6560b9e
	I0816 06:00:56.199329    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.199331    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.199700    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:56.693955    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:56.693979    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.693991    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.693999    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.696425    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:56.696442    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.696452    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.696458    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.696466    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.696474    4495 round_trippers.go:580]     Audit-Id: ba031bef-2cc6-493c-b2b4-1647f1036dcf
	I0816 06:00:56.696480    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.696484    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.696919    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:56.697287    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:56.697296    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.697304    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.697308    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.698764    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:56.698772    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.698777    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.698781    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.698783    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.698788    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.698792    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.698795    4495 round_trippers.go:580]     Audit-Id: 5d7dfbf4-db55-42e6-a1c1-23c5c051ee87
	I0816 06:00:56.698979    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.193086    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:57.193109    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.193121    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.193128    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.195756    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:57.195770    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.195779    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.195784    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.195788    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.195792    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.195796    4495 round_trippers.go:580]     Audit-Id: 7bfa9069-4ac3-4fbf-b1fa-4ad513a19ede
	I0816 06:00:57.195799    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.195938    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7040 chars]
	I0816 06:00:57.196320    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.196327    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.196332    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.196365    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.197653    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.197665    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.197671    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.197675    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.197679    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.197682    4495 round_trippers.go:580]     Audit-Id: 7ca4b854-8de7-4028-8dea-80170976795b
	I0816 06:00:57.197685    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.197688    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.197769    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.198023    4495 pod_ready.go:93] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.198031    4495 pod_ready.go:82] duration metric: took 13.005479196s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
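	The pod_ready lines above come from a poll loop: fetch the pod, fetch its hosting node, and declare success once the pod carries a Ready condition with status "True" (here after 13s for coredns). A stdlib-only Go sketch of that condition test, using simplified stand-in types rather than the client-go API the real code uses:

```go
package main

import "fmt"

// condition mirrors only the v1.PodCondition fields the check cares about.
type condition struct {
	Type   string
	Status string
}

// pod is a simplified stand-in for the decoded API response body.
type pod struct {
	Name       string
	Conditions []condition
}

// isPodReady reports whether the pod has a Ready condition with status "True",
// the same test pod_ready.go applies on each polling round.
func isPodReady(p pod) bool {
	for _, c := range p.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	coredns := pod{
		Name:       "coredns-6f6b679f8f-qvlc2",
		Conditions: []condition{{Type: "Ready", Status: "True"}},
	}
	fmt.Println(isPodReady(coredns)) // true
}
```

	The real loop additionally re-fetches the node between rounds, which is why each pod check in the log is a GET on the pod followed by a GET on multinode-120000.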
	I0816 06:00:57.198051    4495 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.198102    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-120000
	I0816 06:00:57.198106    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.198112    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.198117    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.199301    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.199312    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.199319    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.199322    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.199326    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.199329    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.199334    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.199337    4495 round_trippers.go:580]     Audit-Id: 776899db-76a1-4520-91dc-1a232042cc6b
	I0816 06:00:57.199529    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1278","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6664 chars]
	I0816 06:00:57.199751    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.199758    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.199763    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.199767    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.200815    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.200824    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.200829    4495 round_trippers.go:580]     Audit-Id: c1eb6eb0-9fdb-4aca-9fe0-ec00ecb240a4
	I0816 06:00:57.200833    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.200836    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.200839    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.200843    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.200847    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.201128    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.201301    4495 pod_ready.go:93] pod "etcd-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.201309    4495 pod_ready.go:82] duration metric: took 3.253593ms for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.201318    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.201346    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-120000
	I0816 06:00:57.201351    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.201356    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.201359    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.202430    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.202441    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.202449    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.202455    4495 round_trippers.go:580]     Audit-Id: 131388fe-c69c-4328-b724-70219eb7e2cb
	I0816 06:00:57.202460    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.202463    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.202468    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.202471    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.202638    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-120000","namespace":"kube-system","uid":"6811daff-acfb-4752-939b-3d084a8a4c9a","resourceVersion":"1282","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.mirror":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479305Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0816 06:00:57.202863    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.202871    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.202876    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.202879    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.203876    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:57.203887    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.203895    4495 round_trippers.go:580]     Audit-Id: b14c3d4e-a101-4d94-a887-2d4e9659b31b
	I0816 06:00:57.203899    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.203912    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.203919    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.203922    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.203925    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.204052    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.204213    4495 pod_ready.go:93] pod "kube-apiserver-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.204221    4495 pod_ready.go:82] duration metric: took 2.89703ms for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.204227    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.204253    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-120000
	I0816 06:00:57.204258    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.204263    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.204267    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.205301    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.205308    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.205312    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.205315    4495 round_trippers.go:580]     Audit-Id: 0433aad9-f030-4cdf-9ae4-34a226c8e7d5
	I0816 06:00:57.205317    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.205321    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.205325    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.205328    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.205450    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-120000","namespace":"kube-system","uid":"67f0047c-62f5-4c90-bee3-40dc18cb33e6","resourceVersion":"1285","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.mirror":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0816 06:00:57.205670    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.205677    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.205683    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.205685    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.206873    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.206881    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.206886    4495 round_trippers.go:580]     Audit-Id: 93d1a138-a7da-4caf-aff2-aab72c501a4a
	I0816 06:00:57.206890    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.206895    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.206897    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.206900    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.206903    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.207290    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.207448    4495 pod_ready.go:93] pod "kube-controller-manager-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.207456    4495 pod_ready.go:82] duration metric: took 3.223772ms for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.207463    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.207489    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:57.207494    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.207499    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.207504    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.208473    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:57.208480    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.208487    4495 round_trippers.go:580]     Audit-Id: 3c061cea-7d11-4206-b7c9-36c8140e2cd9
	I0816 06:00:57.208494    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.208498    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.208503    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.208506    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.208509    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.208608    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msbdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dff96db-7737-4e41-a130-a356e3acfd78","resourceVersion":"1263","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0816 06:00:57.208841    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.208848    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.208854    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.208858    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.209894    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.209902    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.209907    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.209912    4495 round_trippers.go:580]     Audit-Id: a45d4e26-0e90-4305-a879-912117dbd94d
	I0816 06:00:57.209919    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.209922    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.209926    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.209929    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.210022    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.210179    4495 pod_ready.go:93] pod "kube-proxy-msbdc" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.210186    4495 pod_ready.go:82] duration metric: took 2.718202ms for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.210201    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.393284    4495 request.go:632] Waited for 183.014885ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:57.393390    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:57.393402    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.393413    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.393421    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.396951    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:57.396967    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.396974    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.396979    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.396982    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.396986    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.396991    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.396995    4495 round_trippers.go:580]     Audit-Id: bbe21f71-61d2-430f-953b-e7b8093dcfd4
	I0816 06:00:57.397075    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vskxm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9b8ca3d-b5bd-4c44-8579-8b31879629ad","resourceVersion":"1104","creationTimestamp":"2024-08-16T12:56:05Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:56:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:57.595047    4495 request.go:632] Waited for 197.620166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:57.595174    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:57.595186    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.595196    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.595206    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.598193    4495 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 06:00:57.598205    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.598212    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.598219    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.598224    4495 round_trippers.go:580]     Content-Length: 210
	I0816 06:00:57.598228    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.598231    4495 round_trippers.go:580]     Audit-Id: e6c1db4f-035f-4dcb-b03e-b514ca813439
	I0816 06:00:57.598234    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.598237    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.598250    4495 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-120000-m03\" not found","reason":"NotFound","details":{"name":"multinode-120000-m03","kind":"nodes"},"code":404}
	I0816 06:00:57.598310    4495 pod_ready.go:98] node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:57.598322    4495 pod_ready.go:82] duration metric: took 388.121491ms for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:57.598331    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:57.598339    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.795148    4495 request.go:632] Waited for 196.68935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:57.795198    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:57.795206    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.795217    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.795226    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.798419    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:57.798436    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.798443    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.798447    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.798450    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.798454    4495 round_trippers.go:580]     Audit-Id: d1ed5234-af59-46c2-88e2-a2256bd63004
	I0816 06:00:57.798457    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.798465    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.798600    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x88cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"21efba47-35db-47ba-ace5-119b04bf7355","resourceVersion":"1001","creationTimestamp":"2024-08-16T12:55:15Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:55:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:57.994207    4495 request.go:632] Waited for 195.273451ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:57.994254    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:57.994262    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.994271    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.994276    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.996238    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.996252    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.996262    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.996266    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.996269    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.996273    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.996279    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:57.996286    4495 round_trippers.go:580]     Audit-Id: 2660763d-6ecb-494b-a793-54fd44f2fe86
	I0816 06:00:57.996431    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000-m02","uid":"57b3de1e-d3de-4534-9ecc-a0706c682584","resourceVersion":"1019","creationTimestamp":"2024-08-16T12:58:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_16T05_58_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:58:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3805 chars]
	I0816 06:00:57.996608    4495 pod_ready.go:93] pod "kube-proxy-x88cp" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.996617    4495 pod_ready.go:82] duration metric: took 398.279857ms for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.996624    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:58.194643    4495 request.go:632] Waited for 197.953694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:58.194771    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:58.194783    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.194797    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.194805    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.197487    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:58.197504    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.197511    4495 round_trippers.go:580]     Audit-Id: efed1839-ae04-4990-bf50-53ddeffece79
	I0816 06:00:58.197516    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.197520    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.197525    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.197528    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.197532    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.197622    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-120000","namespace":"kube-system","uid":"b8188bb8-5278-422d-86a5-19d70c796638","resourceVersion":"1291","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.mirror":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.seen":"2024-08-16T12:54:27.908480653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0816 06:00:58.393792    4495 request.go:632] Waited for 195.865798ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:58.393925    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:58.393938    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.393950    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.393958    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.397065    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.397081    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.397088    4495 round_trippers.go:580]     Audit-Id: ec49ba35-97a7-49fd-be60-931320cb5ecb
	I0816 06:00:58.397093    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.397102    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.397110    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.397118    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.397145    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.397575    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:58.397827    4495 pod_ready.go:93] pod "kube-scheduler-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:58.397840    4495 pod_ready.go:82] duration metric: took 401.218068ms for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:58.397849    4495 pod_ready.go:39] duration metric: took 14.210217723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:58.397864    4495 api_server.go:52] waiting for apiserver process to appear ...
	I0816 06:00:58.397928    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:58.410899    4495 command_runner.go:130] > 1735
	I0816 06:00:58.410960    4495 api_server.go:72] duration metric: took 30.977063274s to wait for apiserver process to appear ...
	I0816 06:00:58.410971    4495 api_server.go:88] waiting for apiserver healthz status ...
	I0816 06:00:58.410982    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:58.414427    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0816 06:00:58.414458    4495 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0816 06:00:58.414463    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.414469    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.414473    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.414934    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:58.414944    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.414958    4495 round_trippers.go:580]     Audit-Id: 9dfd7757-f7bf-4d1d-b5df-89f0fddf7c8b
	I0816 06:00:58.414979    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.414985    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.414992    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.414996    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.414999    4495 round_trippers.go:580]     Content-Length: 263
	I0816 06:00:58.415002    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.415010    4495 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0816 06:00:58.415030    4495 api_server.go:141] control plane version: v1.31.0
	I0816 06:00:58.415038    4495 api_server.go:131] duration metric: took 4.062075ms to wait for apiserver health ...
	I0816 06:00:58.415044    4495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 06:00:58.594269    4495 request.go:632] Waited for 179.184796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.594362    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.594372    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.594383    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.594391    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.597820    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.597830    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.597835    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.597839    4495 round_trippers.go:580]     Audit-Id: 53dd41ea-d419-4412-b804-a5539fe60b44
	I0816 06:00:58.597841    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.597845    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.597849    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.597851    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.599096    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89335 chars]
	I0816 06:00:58.601012    4495 system_pods.go:59] 12 kube-system pods found
	I0816 06:00:58.601023    4495 system_pods.go:61] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running
	I0816 06:00:58.601026    4495 system_pods.go:61] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running
	I0816 06:00:58.601029    4495 system_pods.go:61] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:58.601032    4495 system_pods.go:61] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:58.601037    4495 system_pods.go:61] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running
	I0816 06:00:58.601040    4495 system_pods.go:61] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running
	I0816 06:00:58.601043    4495 system_pods.go:61] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running
	I0816 06:00:58.601046    4495 system_pods.go:61] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running
	I0816 06:00:58.601048    4495 system_pods.go:61] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:58.601051    4495 system_pods.go:61] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:58.601053    4495 system_pods.go:61] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running
	I0816 06:00:58.601058    4495 system_pods.go:61] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:58.601061    4495 system_pods.go:74] duration metric: took 186.018056ms to wait for pod list to return data ...
	I0816 06:00:58.601067    4495 default_sa.go:34] waiting for default service account to be created ...
	I0816 06:00:58.795127    4495 request.go:632] Waited for 193.965542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0816 06:00:58.795226    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0816 06:00:58.795237    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.795248    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.795255    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.798470    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.798486    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.798493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.798497    4495 round_trippers.go:580]     Content-Length: 262
	I0816 06:00:58.798500    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.798503    4495 round_trippers.go:580]     Audit-Id: e8f9e561-60af-425b-a3f9-3b44a9fda1fd
	I0816 06:00:58.798506    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.798510    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.798528    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.798551    4495 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6cef175d-5a13-4cc4-a06f-ecf9ac67dfb6","resourceVersion":"361","creationTimestamp":"2024-08-16T12:54:33Z"}}]}
	I0816 06:00:58.798699    4495 default_sa.go:45] found service account: "default"
	I0816 06:00:58.798713    4495 default_sa.go:55] duration metric: took 197.643778ms for default service account to be created ...
	I0816 06:00:58.798721    4495 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 06:00:58.993428    4495 request.go:632] Waited for 194.666557ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.993458    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.993463    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.993469    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.993474    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.999368    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:58.999379    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.999384    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.999389    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:59 GMT
	I0816 06:00:58.999393    4495 round_trippers.go:580]     Audit-Id: 6e65c80c-c099-4005-ade4-9a18d234dfc8
	I0816 06:00:58.999397    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.999402    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.999406    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.999984    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89335 chars]
	I0816 06:00:59.001890    4495 system_pods.go:86] 12 kube-system pods found
	I0816 06:00:59.001900    4495 system_pods.go:89] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running
	I0816 06:00:59.001904    4495 system_pods.go:89] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running
	I0816 06:00:59.001908    4495 system_pods.go:89] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:59.001912    4495 system_pods.go:89] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:59.001915    4495 system_pods.go:89] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running
	I0816 06:00:59.001918    4495 system_pods.go:89] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running
	I0816 06:00:59.001922    4495 system_pods.go:89] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running
	I0816 06:00:59.001925    4495 system_pods.go:89] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running
	I0816 06:00:59.001928    4495 system_pods.go:89] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:59.001931    4495 system_pods.go:89] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:59.001934    4495 system_pods.go:89] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running
	I0816 06:00:59.001939    4495 system_pods.go:89] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:59.001944    4495 system_pods.go:126] duration metric: took 203.222577ms to wait for k8s-apps to be running ...
	I0816 06:00:59.001953    4495 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 06:00:59.002005    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 06:00:59.013228    4495 system_svc.go:56] duration metric: took 11.274015ms WaitForService to wait for kubelet
	I0816 06:00:59.013241    4495 kubeadm.go:582] duration metric: took 31.579356247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:00:59.013257    4495 node_conditions.go:102] verifying NodePressure condition ...
	I0816 06:00:59.193988    4495 request.go:632] Waited for 180.658554ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:59.194062    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:59.194070    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:59.194081    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:59.194088    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:59.196901    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:59.196917    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:59.196924    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:59.196929    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:59.196956    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:59.196965    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:59.196971    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:59 GMT
	I0816 06:00:59.196975    4495 round_trippers.go:580]     Audit-Id: aba56bfc-9e40-433b-a514-9d8e27ae8f86
	I0816 06:00:59.197113    4495 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10017 chars]
	I0816 06:00:59.197496    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:59.197507    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:59.197516    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:59.197521    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:59.197526    4495 node_conditions.go:105] duration metric: took 184.267075ms to run NodePressure ...
	I0816 06:00:59.197536    4495 start.go:241] waiting for startup goroutines ...
	I0816 06:00:59.197543    4495 start.go:246] waiting for cluster config update ...
	I0816 06:00:59.197552    4495 start.go:255] writing updated cluster config ...
	I0816 06:00:59.219582    4495 out.go:201] 
	I0816 06:00:59.241519    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:59.241632    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.264316    4495 out.go:177] * Starting "multinode-120000-m02" worker node in "multinode-120000" cluster
	I0816 06:00:59.306184    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:59.306218    4495 cache.go:56] Caching tarball of preloaded images
	I0816 06:00:59.306418    4495 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:00:59.306436    4495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:00:59.306546    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.307365    4495 start.go:360] acquireMachinesLock for multinode-120000-m02: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:00:59.307481    4495 start.go:364] duration metric: took 91.575µs to acquireMachinesLock for "multinode-120000-m02"
	I0816 06:00:59.307508    4495 start.go:96] Skipping create...Using existing machine configuration
	I0816 06:00:59.307515    4495 fix.go:54] fixHost starting: m02
	I0816 06:00:59.307945    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:59.307981    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:59.317188    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53314
	I0816 06:00:59.317569    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:59.317913    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:59.317923    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:59.318122    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:59.318232    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:00:59.318317    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetState
	I0816 06:00:59.318397    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.318479    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4443
	I0816 06:00:59.319411    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid 4443 missing from process table
	I0816 06:00:59.319466    4495 fix.go:112] recreateIfNeeded on multinode-120000-m02: state=Stopped err=<nil>
	I0816 06:00:59.319507    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	W0816 06:00:59.319598    4495 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 06:00:59.342208    4495 out.go:177] * Restarting existing hyperkit VM for "multinode-120000-m02" ...
	I0816 06:00:59.363128    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .Start
	I0816 06:00:59.363436    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.363488    4495 main.go:141] libmachine: (multinode-120000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid
	I0816 06:00:59.365357    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid 4443 missing from process table
	I0816 06:00:59.365375    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | pid 4443 is in state "Stopped"
	I0816 06:00:59.365403    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid...
	I0816 06:00:59.365644    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Using UUID ee85a2c3-93d0-4de0-ac93-052eb9962a60
	I0816 06:00:59.392192    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Generated MAC fa:8b:6e:be:7a:d1
	I0816 06:00:59.392215    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000
	I0816 06:00:59.392328    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee85a2c3-93d0-4de0-ac93-052eb9962a60", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:00:59.392369    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee85a2c3-93d0-4de0-ac93-052eb9962a60", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:00:59.392424    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ee85a2c3-93d0-4de0-ac93-052eb9962a60", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/multinode-120000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"}
	I0816 06:00:59.392465    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ee85a2c3-93d0-4de0-ac93-052eb9962a60 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/multinode-120000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"
	I0816 06:00:59.392475    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:00:59.393842    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Pid is 4816
	I0816 06:00:59.394301    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Attempt 0
	I0816 06:00:59.394311    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.394408    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4816
	I0816 06:00:59.396517    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Searching for fa:8b:6e:be:7a:d1 in /var/db/dhcpd_leases ...
	I0816 06:00:59.396631    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0816 06:00:59.396665    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:00:59.396689    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:00:59.396706    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66c09e76}
	I0816 06:00:59.396723    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Found match: fa:8b:6e:be:7a:d1
	I0816 06:00:59.396735    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | IP: 192.169.0.15
	I0816 06:00:59.396766    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetConfigRaw
	I0816 06:00:59.397508    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:00:59.397686    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.398150    4495 machine.go:93] provisionDockerMachine start ...
	I0816 06:00:59.398161    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:00:59.398280    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:00:59.398384    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:00:59.398487    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:00:59.398584    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:00:59.398663    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:00:59.398779    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:59.398936    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:00:59.398947    4495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 06:00:59.401869    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:00:59.409985    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:00:59.410992    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:59.411010    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:59.411040    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:59.411055    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:59.797737    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:00:59.797763    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:00:59.912467    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:59.912494    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:59.912503    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:59.912510    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:59.913309    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:00:59.913319    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:01:05.478997    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:01:05.479099    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:01:05.479115    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:01:05.504217    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:01:10.467844    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 06:01:10.467861    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.467994    4495 buildroot.go:166] provisioning hostname "multinode-120000-m02"
	I0816 06:01:10.468006    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.468090    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.468183    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.468288    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.468377    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.468462    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.468595    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.468749    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.468761    4495 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-120000-m02 && echo "multinode-120000-m02" | sudo tee /etc/hostname
	I0816 06:01:10.542740    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-120000-m02
	
	I0816 06:01:10.542760    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.542891    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.542994    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.543103    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.543188    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.543325    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.543468    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.543480    4495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-120000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-120000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-120000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 06:01:10.613856    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:01:10.613871    4495 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 06:01:10.613888    4495 buildroot.go:174] setting up certificates
	I0816 06:01:10.613895    4495 provision.go:84] configureAuth start
	I0816 06:01:10.613902    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.614033    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:01:10.614135    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.614217    4495 provision.go:143] copyHostCerts
	I0816 06:01:10.614244    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:01:10.614309    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 06:01:10.614321    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:01:10.614523    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 06:01:10.614724    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:01:10.614764    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 06:01:10.614769    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:01:10.614850    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 06:01:10.614989    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:01:10.615028    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 06:01:10.615033    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:01:10.615116    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 06:01:10.615260    4495 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.multinode-120000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-120000-m02]
	I0816 06:01:10.752465    4495 provision.go:177] copyRemoteCerts
	I0816 06:01:10.752518    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 06:01:10.752532    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.752646    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.752746    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.752838    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.752935    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:10.792382    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 06:01:10.792452    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 06:01:10.811166    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 06:01:10.811231    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0816 06:01:10.830044    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 06:01:10.830105    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 06:01:10.848831    4495 provision.go:87] duration metric: took 234.932882ms to configureAuth
	I0816 06:01:10.848843    4495 buildroot.go:189] setting minikube options for container-runtime
	I0816 06:01:10.849004    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:01:10.849017    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:10.849142    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.849233    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.849314    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.849399    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.849473    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.849586    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.849717    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.849725    4495 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 06:01:10.913790    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 06:01:10.913802    4495 buildroot.go:70] root file system type: tmpfs
	I0816 06:01:10.913891    4495 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 06:01:10.913901    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.914033    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.914130    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.914230    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.914313    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.914447    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.914588    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.914635    4495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 06:01:10.989565    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 06:01:10.989584    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.989736    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.989836    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.989914    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.990002    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.990140    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.990292    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.990304    4495 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 06:01:12.597650    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 06:01:12.597675    4495 machine.go:96] duration metric: took 13.199777318s to provisionDockerMachine
	I0816 06:01:12.597700    4495 start.go:293] postStartSetup for "multinode-120000-m02" (driver="hyperkit")
	I0816 06:01:12.597716    4495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 06:01:12.597730    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.597918    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 06:01:12.597930    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.598026    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.598112    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.598198    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.598279    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.641252    4495 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 06:01:12.645893    4495 command_runner.go:130] > NAME=Buildroot
	I0816 06:01:12.645902    4495 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 06:01:12.645906    4495 command_runner.go:130] > ID=buildroot
	I0816 06:01:12.645910    4495 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 06:01:12.645914    4495 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 06:01:12.646115    4495 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 06:01:12.646129    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 06:01:12.646249    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 06:01:12.646427    4495 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 06:01:12.646433    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 06:01:12.646648    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 06:01:12.657358    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:01:12.689642    4495 start.go:296] duration metric: took 91.931255ms for postStartSetup
	I0816 06:01:12.689664    4495 fix.go:56] duration metric: took 13.382413227s for fixHost
	I0816 06:01:12.689724    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.689854    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.689949    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.690035    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.690112    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.690231    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:12.690366    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:12.690374    4495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 06:01:12.754485    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813272.898141464
	
	I0816 06:01:12.754498    4495 fix.go:216] guest clock: 1723813272.898141464
	I0816 06:01:12.754503    4495 fix.go:229] Guest: 2024-08-16 06:01:12.898141464 -0700 PDT Remote: 2024-08-16 06:01:12.68967 -0700 PDT m=+72.289020142 (delta=208.471464ms)
	I0816 06:01:12.754516    4495 fix.go:200] guest clock delta is within tolerance: 208.471464ms
	I0816 06:01:12.754519    4495 start.go:83] releasing machines lock for "multinode-120000-m02", held for 13.447292745s
	I0816 06:01:12.754536    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.754672    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:01:12.775333    4495 out.go:177] * Found network options:
	I0816 06:01:12.796965    4495 out.go:177]   - NO_PROXY=192.169.0.14
	W0816 06:01:12.818178    4495 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 06:01:12.818216    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819064    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819327    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819483    4495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 06:01:12.819528    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	W0816 06:01:12.819581    4495 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 06:01:12.819704    4495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 06:01:12.819720    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.819767    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.819917    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.819961    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.820120    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.820160    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.820334    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.820350    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.820527    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.856617    4495 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 06:01:12.856639    4495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 06:01:12.856693    4495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 06:01:12.899107    4495 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 06:01:12.899982    4495 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0816 06:01:12.900016    4495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 06:01:12.900027    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:01:12.900139    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:01:12.916181    4495 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0816 06:01:12.916362    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 06:01:12.924577    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 06:01:12.932819    4495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 06:01:12.932863    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 06:01:12.941195    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:01:12.949373    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 06:01:12.957522    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:01:12.965815    4495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 06:01:12.974440    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 06:01:12.982832    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 06:01:12.991213    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 06:01:12.999286    4495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 06:01:13.006499    4495 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 06:01:13.006629    4495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 06:01:13.013965    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:01:13.112853    4495 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 06:01:13.132768    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:01:13.132836    4495 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 06:01:13.149276    4495 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0816 06:01:13.150775    4495 command_runner.go:130] > [Unit]
	I0816 06:01:13.150786    4495 command_runner.go:130] > Description=Docker Application Container Engine
	I0816 06:01:13.150792    4495 command_runner.go:130] > Documentation=https://docs.docker.com
	I0816 06:01:13.150797    4495 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0816 06:01:13.150802    4495 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0816 06:01:13.150812    4495 command_runner.go:130] > StartLimitBurst=3
	I0816 06:01:13.150816    4495 command_runner.go:130] > StartLimitIntervalSec=60
	I0816 06:01:13.150820    4495 command_runner.go:130] > [Service]
	I0816 06:01:13.150823    4495 command_runner.go:130] > Type=notify
	I0816 06:01:13.150826    4495 command_runner.go:130] > Restart=on-failure
	I0816 06:01:13.150832    4495 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0816 06:01:13.150837    4495 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0816 06:01:13.150847    4495 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0816 06:01:13.150854    4495 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0816 06:01:13.150859    4495 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0816 06:01:13.150866    4495 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0816 06:01:13.150871    4495 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0816 06:01:13.150878    4495 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0816 06:01:13.150890    4495 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0816 06:01:13.150895    4495 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0816 06:01:13.150899    4495 command_runner.go:130] > ExecStart=
	I0816 06:01:13.150911    4495 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0816 06:01:13.150921    4495 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0816 06:01:13.150929    4495 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0816 06:01:13.150935    4495 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0816 06:01:13.150940    4495 command_runner.go:130] > LimitNOFILE=infinity
	I0816 06:01:13.150943    4495 command_runner.go:130] > LimitNPROC=infinity
	I0816 06:01:13.150948    4495 command_runner.go:130] > LimitCORE=infinity
	I0816 06:01:13.150954    4495 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0816 06:01:13.150959    4495 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0816 06:01:13.150963    4495 command_runner.go:130] > TasksMax=infinity
	I0816 06:01:13.150968    4495 command_runner.go:130] > TimeoutStartSec=0
	I0816 06:01:13.150974    4495 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0816 06:01:13.150979    4495 command_runner.go:130] > Delegate=yes
	I0816 06:01:13.150990    4495 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0816 06:01:13.151009    4495 command_runner.go:130] > KillMode=process
	I0816 06:01:13.151017    4495 command_runner.go:130] > [Install]
	I0816 06:01:13.151023    4495 command_runner.go:130] > WantedBy=multi-user.target
	I0816 06:01:13.151101    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:01:13.166056    4495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 06:01:13.185416    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:01:13.196151    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:01:13.206342    4495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 06:01:13.233286    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:01:13.244370    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:01:13.258918    4495 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0816 06:01:13.259169    4495 ssh_runner.go:195] Run: which cri-dockerd
	I0816 06:01:13.261923    4495 command_runner.go:130] > /usr/bin/cri-dockerd
	I0816 06:01:13.262087    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 06:01:13.269387    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 06:01:13.282813    4495 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 06:01:13.380083    4495 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 06:01:13.480701    4495 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 06:01:13.480726    4495 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 06:01:13.495596    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:01:13.600122    4495 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 06:02:14.623949    4495 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0816 06:02:14.623964    4495 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0816 06:02:14.623974    4495 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.025042574s)
	I0816 06:02:14.624046    4495 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 06:02:14.633055    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0816 06:02:14.633068    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445128662Z" level=info msg="Starting up"
	I0816 06:02:14.633077    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445576424Z" level=info msg="containerd not running, starting managed containerd"
	I0816 06:02:14.633091    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.446087902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	I0816 06:02:14.633100    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.464562092Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0816 06:02:14.633110    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479466694Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0816 06:02:14.633122    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479531751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633131    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479594404Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0816 06:02:14.633143    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479629031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633154    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479842292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633164    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479889532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633183    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480015247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633208    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480066795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633226    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480105284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633237    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480134704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633249    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480284892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633260    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480518152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633274    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482158345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633284    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482227762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633310    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482355246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633322    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482401189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0816 06:02:14.633334    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482551004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0816 06:02:14.633342    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482610366Z" level=info msg="metadata content store policy set" policy=shared
	I0816 06:02:14.633350    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484743898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0816 06:02:14.633359    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484842901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0816 06:02:14.633368    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484892400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0816 06:02:14.633378    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484992184Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0816 06:02:14.633387    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485035944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0816 06:02:14.633396    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485102391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0816 06:02:14.633404    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485716230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0816 06:02:14.633413    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485838842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0816 06:02:14.633424    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485887463Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0816 06:02:14.633433    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485941187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0816 06:02:14.633444    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485983421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633456    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486017407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633467    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486071726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633476    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486113872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633485    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486150386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633495    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486191889Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633571    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486229406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633584    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486263661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633593    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486305970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633602    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486413763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633611    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486510443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633622    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486666027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633631    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486744588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633640    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486783463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633650    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486821985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633659    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486859811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633668    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486892478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633678    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486925903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633687    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486956569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633696    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486987244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633705    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487017252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633714    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487049437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0816 06:02:14.633723    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487086389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633732    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487117852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633741    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487147113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633750    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487232935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0816 06:02:14.633762    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487282108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0816 06:02:14.633772    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487315003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633845    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487367683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0816 06:02:14.633858    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487403326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633868    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487433733Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0816 06:02:14.633876    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487462518Z" level=info msg="NRI interface is disabled by configuration."
	I0816 06:02:14.633885    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487688948Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0816 06:02:14.633893    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487784884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0816 06:02:14.633902    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487850681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0816 06:02:14.633910    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487886542Z" level=info msg="containerd successfully booted in 0.024053s"
	I0816 06:02:14.633918    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.473777953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0816 06:02:14.633926    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.495069807Z" level=info msg="Loading containers: start."
	I0816 06:02:14.633947    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.607134105Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0816 06:02:14.633958    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.664329023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0816 06:02:14.633971    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709750511Z" level=warning msg="error locating sandbox id be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c: sandbox be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c not found"
	I0816 06:02:14.633986    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709809833Z" level=warning msg="error locating sandbox id 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f: sandbox 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f not found"
	I0816 06:02:14.633994    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709978520Z" level=info msg="Loading containers: done."
	I0816 06:02:14.634003    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.716985320Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0816 06:02:14.634011    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.717159977Z" level=info msg="Daemon has completed initialization"
	I0816 06:02:14.634020    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738467303Z" level=info msg="API listen on /var/run/docker.sock"
	I0816 06:02:14.634026    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 systemd[1]: Started Docker Application Container Engine.
	I0816 06:02:14.634036    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738583174Z" level=info msg="API listen on [::]:2376"
	I0816 06:02:14.634045    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.756208041Z" level=info msg="Processing signal 'terminated'"
	I0816 06:02:14.634088    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757227405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0816 06:02:14.634098    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757420422Z" level=info msg="Daemon shutdown complete"
	I0816 06:02:14.634107    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757484340Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0816 06:02:14.634117    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757573969Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0816 06:02:14.634124    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0816 06:02:14.634130    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0816 06:02:14.634136    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0816 06:02:14.634143    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0816 06:02:14.634153    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 dockerd[910]: time="2024-08-16T13:01:14.796343068Z" level=info msg="Starting up"
	I0816 06:02:14.634163    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0816 06:02:14.634170    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0816 06:02:14.634179    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0816 06:02:14.634186    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0816 06:02:14.660433    4495 out.go:201] 
	W0816 06:02:14.680488    4495 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 13:01:11 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445128662Z" level=info msg="Starting up"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445576424Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.446087902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.464562092Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479466694Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479531751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479594404Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479629031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479842292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479889532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480015247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480066795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480105284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480134704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480284892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480518152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482158345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482227762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482355246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482401189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482551004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482610366Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484743898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484842901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484892400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484992184Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485035944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485102391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485716230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485838842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485887463Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485941187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485983421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486017407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486071726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486113872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486150386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486191889Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486229406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486263661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486305970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486413763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486510443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486666027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486744588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486783463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486821985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486859811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486892478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486925903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486956569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486987244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487017252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487049437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487086389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487117852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487147113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487232935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487282108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487315003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487367683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487403326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487433733Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487462518Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487688948Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487784884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487850681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487886542Z" level=info msg="containerd successfully booted in 0.024053s"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.473777953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.495069807Z" level=info msg="Loading containers: start."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.607134105Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.664329023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709750511Z" level=warning msg="error locating sandbox id be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c: sandbox be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709809833Z" level=warning msg="error locating sandbox id 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f: sandbox 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709978520Z" level=info msg="Loading containers: done."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.716985320Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.717159977Z" level=info msg="Daemon has completed initialization"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738467303Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 13:01:12 multinode-120000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738583174Z" level=info msg="API listen on [::]:2376"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.756208041Z" level=info msg="Processing signal 'terminated'"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757227405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757420422Z" level=info msg="Daemon shutdown complete"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757484340Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757573969Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 13:01:13 multinode-120000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 dockerd[910]: time="2024-08-16T13:01:14.796343068Z" level=info msg="Starting up"
	Aug 16 13:02:14 multinode-120000-m02 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 13:01:11 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445128662Z" level=info msg="Starting up"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445576424Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.446087902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.464562092Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479466694Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479531751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479594404Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479629031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479842292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479889532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480015247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480066795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480105284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480134704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480284892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480518152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482158345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482227762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482355246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482401189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482551004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482610366Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484743898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484842901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484892400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484992184Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485035944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485102391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485716230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485838842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485887463Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485941187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485983421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486017407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486071726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486113872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486150386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486191889Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486229406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486263661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486305970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486413763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486510443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486666027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486744588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486783463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486821985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486859811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486892478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486925903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486956569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486987244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487017252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487049437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487086389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487117852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487147113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487232935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487282108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487315003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487367683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487403326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487433733Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487462518Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487688948Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487784884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487850681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487886542Z" level=info msg="containerd successfully booted in 0.024053s"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.473777953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.495069807Z" level=info msg="Loading containers: start."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.607134105Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.664329023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709750511Z" level=warning msg="error locating sandbox id be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c: sandbox be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709809833Z" level=warning msg="error locating sandbox id 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f: sandbox 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709978520Z" level=info msg="Loading containers: done."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.716985320Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.717159977Z" level=info msg="Daemon has completed initialization"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738467303Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 13:01:12 multinode-120000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738583174Z" level=info msg="API listen on [::]:2376"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.756208041Z" level=info msg="Processing signal 'terminated'"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757227405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757420422Z" level=info msg="Daemon shutdown complete"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757484340Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757573969Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 13:01:13 multinode-120000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 dockerd[910]: time="2024-08-16T13:01:14.796343068Z" level=info msg="Starting up"
	Aug 16 13:02:14 multinode-120000-m02 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 06:02:14.680575    4495 out.go:270] * 
	W0816 06:02:14.681774    4495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:02:14.723648    4495 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-120000 -n multinode-120000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-120000 logs -n 25: (2.867081945s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-120000 cp multinode-120000-m02:/home/docker/cp-test.txt                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000:/home/docker/cp-test_multinode-120000-m02_multinode-120000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n multinode-120000 sudo cat                                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | /home/docker/cp-test_multinode-120000-m02_multinode-120000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-120000 cp multinode-120000-m02:/home/docker/cp-test.txt                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03:/home/docker/cp-test_multinode-120000-m02_multinode-120000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n multinode-120000-m03 sudo cat                                                                       | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | /home/docker/cp-test_multinode-120000-m02_multinode-120000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-120000 cp testdata/cp-test.txt                                                                                    | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4207728907/001/cp-test_multinode-120000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000:/home/docker/cp-test_multinode-120000-m03_multinode-120000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n multinode-120000 sudo cat                                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | /home/docker/cp-test_multinode-120000-m03_multinode-120000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt                                                           | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m02:/home/docker/cp-test_multinode-120000-m03_multinode-120000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n                                                                                                     | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | multinode-120000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-120000 ssh -n multinode-120000-m02 sudo cat                                                                       | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	|         | /home/docker/cp-test_multinode-120000-m03_multinode-120000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-120000 node stop m03                                                                                              | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:56 PDT |
	| node    | multinode-120000 node start                                                                                                 | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:56 PDT | 16 Aug 24 05:57 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-120000                                                                                                    | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:57 PDT |                     |
	| stop    | -p multinode-120000                                                                                                         | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:57 PDT | 16 Aug 24 05:57 PDT |
	| start   | -p multinode-120000                                                                                                         | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:57 PDT | 16 Aug 24 05:59 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-120000                                                                                                    | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:59 PDT |                     |
	| node    | multinode-120000 node delete                                                                                                | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:59 PDT | 16 Aug 24 05:59 PDT |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-120000 stop                                                                                                       | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 05:59 PDT | 16 Aug 24 06:00 PDT |
	| start   | -p multinode-120000                                                                                                         | multinode-120000 | jenkins | v1.33.1 | 16 Aug 24 06:00 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 06:00:00
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 06:00:00.437269    4495 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:00:00.437534    4495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.437540    4495 out.go:358] Setting ErrFile to fd 2...
	I0816 06:00:00.437546    4495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.437715    4495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:00:00.439130    4495 out.go:352] Setting JSON to false
	I0816 06:00:00.461246    4495 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2978,"bootTime":1723810222,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:00:00.461338    4495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:00:00.484085    4495 out.go:177] * [multinode-120000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:00:00.526534    4495 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:00:00.526607    4495 notify.go:220] Checking for updates...
	I0816 06:00:00.557254    4495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:00.578492    4495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:00:00.600661    4495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:00:00.648199    4495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:00:00.669461    4495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:00:00.691247    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:00.691911    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.692016    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.701577    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53279
	I0816 06:00:00.701934    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.702350    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.702361    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.702577    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.702697    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.702916    4495 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:00:00.703168    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.703191    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.711617    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53281
	I0816 06:00:00.711932    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.712287    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.712302    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.712518    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.712671    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.741305    4495 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 06:00:00.783251    4495 start.go:297] selected driver: hyperkit
	I0816 06:00:00.783283    4495 start.go:901] validating driver "hyperkit" against &{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:00.783519    4495 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:00:00.783713    4495 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:00:00.783913    4495 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:00:00.793394    4495 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:00:00.797095    4495 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.797118    4495 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:00:00.799739    4495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:00:00.799780    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:00.799788    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:00.799858    4495 start.go:340] cluster config:
	{Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:00.799970    4495 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:00:00.821048    4495 out.go:177] * Starting "multinode-120000" primary control-plane node in "multinode-120000" cluster
	I0816 06:00:00.842239    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:00.842333    4495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:00:00.842355    4495 cache.go:56] Caching tarball of preloaded images
	I0816 06:00:00.842583    4495 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:00:00.842601    4495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:00:00.842770    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:00.843697    4495 start.go:360] acquireMachinesLock for multinode-120000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:00:00.843818    4495 start.go:364] duration metric: took 95.678µs to acquireMachinesLock for "multinode-120000"
	I0816 06:00:00.843855    4495 start.go:96] Skipping create...Using existing machine configuration
	I0816 06:00:00.843873    4495 fix.go:54] fixHost starting: 
	I0816 06:00:00.844306    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.844337    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.853481    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53283
	I0816 06:00:00.853843    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.854209    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.854222    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.854433    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.854578    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:00.854677    4495 main.go:141] libmachine: (multinode-120000) Calling .GetState
	I0816 06:00:00.854759    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.854838    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4436
	I0816 06:00:00.855754    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid 4436 missing from process table
	I0816 06:00:00.855779    4495 fix.go:112] recreateIfNeeded on multinode-120000: state=Stopped err=<nil>
	I0816 06:00:00.855796    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	W0816 06:00:00.855889    4495 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 06:00:00.877085    4495 out.go:177] * Restarting existing hyperkit VM for "multinode-120000" ...
	I0816 06:00:00.897869    4495 main.go:141] libmachine: (multinode-120000) Calling .Start
	I0816 06:00:00.898015    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.898040    4495 main.go:141] libmachine: (multinode-120000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid
	I0816 06:00:00.898985    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid 4436 missing from process table
	I0816 06:00:00.899002    4495 main.go:141] libmachine: (multinode-120000) DBG | pid 4436 is in state "Stopped"
	I0816 06:00:00.899011    4495 main.go:141] libmachine: (multinode-120000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid...
	I0816 06:00:00.899178    4495 main.go:141] libmachine: (multinode-120000) DBG | Using UUID 3c9151c1-070c-42c3-931c-22df86688b90
	I0816 06:00:01.008020    4495 main.go:141] libmachine: (multinode-120000) DBG | Generated MAC fa:4b:15:6b:d9:84
	I0816 06:00:01.008043    4495 main.go:141] libmachine: (multinode-120000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000
	I0816 06:00:01.008169    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3c9151c1-070c-42c3-931c-22df86688b90", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a68a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:00:01.008196    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3c9151c1-070c-42c3-931c-22df86688b90", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a68a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:00:01.008239    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3c9151c1-070c-42c3-931c-22df86688b90", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/multinode-120000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"}
	I0816 06:00:01.008264    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3c9151c1-070c-42c3-931c-22df86688b90 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/multinode-120000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"
	I0816 06:00:01.008274    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:00:01.009750    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 DEBUG: hyperkit: Pid is 4510
	I0816 06:00:01.010226    4495 main.go:141] libmachine: (multinode-120000) DBG | Attempt 0
	I0816 06:00:01.010247    4495 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:01.010362    4495 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4510
	I0816 06:00:01.012057    4495 main.go:141] libmachine: (multinode-120000) DBG | Searching for fa:4b:15:6b:d9:84 in /var/db/dhcpd_leases ...
	I0816 06:00:01.012159    4495 main.go:141] libmachine: (multinode-120000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0816 06:00:01.012173    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:00:01.012186    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66c09e76}
	I0816 06:00:01.012210    4495 main.go:141] libmachine: (multinode-120000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09e4c}
	I0816 06:00:01.012220    4495 main.go:141] libmachine: (multinode-120000) DBG | Found match: fa:4b:15:6b:d9:84
	I0816 06:00:01.012227    4495 main.go:141] libmachine: (multinode-120000) DBG | IP: 192.169.0.14
	I0816 06:00:01.012281    4495 main.go:141] libmachine: (multinode-120000) Calling .GetConfigRaw
	I0816 06:00:01.012949    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:01.013136    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:01.013567    4495 machine.go:93] provisionDockerMachine start ...
	I0816 06:00:01.013578    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:01.013725    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:01.013820    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:01.013903    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:01.013995    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:01.014112    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:01.014241    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:01.014512    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:01.014525    4495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 06:00:01.017464    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:00:01.068185    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:00:01.068876    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:01.068897    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:01.068911    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:01.068919    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:01.454021    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:00:01.454037    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:00:01.568953    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:01.568980    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:01.568990    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:01.568997    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:01.569812    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:00:01.569825    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:00:07.168469    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 06:00:07.168487    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 06:00:07.168497    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 06:00:07.193237    4495 main.go:141] libmachine: (multinode-120000) DBG | 2024/08/16 06:00:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 06:00:12.079930    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 06:00:12.079944    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.080122    4495 buildroot.go:166] provisioning hostname "multinode-120000"
	I0816 06:00:12.080134    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.080229    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.080326    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.080409    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.080501    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.080601    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.080741    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.080956    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.080964    4495 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-120000 && echo "multinode-120000" | sudo tee /etc/hostname
	I0816 06:00:12.148519    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-120000
	
	I0816 06:00:12.148538    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.148669    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.148753    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.148849    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.148961    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.149085    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.149226    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.149238    4495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-120000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-120000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-120000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 06:00:12.211112    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:00:12.211132    4495 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 06:00:12.211152    4495 buildroot.go:174] setting up certificates
	I0816 06:00:12.211162    4495 provision.go:84] configureAuth start
	I0816 06:00:12.211169    4495 main.go:141] libmachine: (multinode-120000) Calling .GetMachineName
	I0816 06:00:12.211306    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:12.211422    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.211551    4495 provision.go:143] copyHostCerts
	I0816 06:00:12.211586    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:00:12.211655    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 06:00:12.211663    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:00:12.211800    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 06:00:12.211998    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:00:12.212038    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 06:00:12.212043    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:00:12.212127    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 06:00:12.212273    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:00:12.212325    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 06:00:12.212330    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:00:12.212405    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 06:00:12.212550    4495 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.multinode-120000 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-120000]
	I0816 06:00:12.269903    4495 provision.go:177] copyRemoteCerts
	I0816 06:00:12.269960    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 06:00:12.269974    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.270075    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.270176    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.270269    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.270367    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:12.306571    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 06:00:12.306648    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 06:00:12.326772    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 06:00:12.326832    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0816 06:00:12.347062    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 06:00:12.347120    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 06:00:12.366749    4495 provision.go:87] duration metric: took 155.576654ms to configureAuth
	I0816 06:00:12.366761    4495 buildroot.go:189] setting minikube options for container-runtime
	I0816 06:00:12.366918    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:12.366931    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:12.367058    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.367148    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.367235    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.367325    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.367420    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.367536    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.367659    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.367666    4495 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 06:00:12.425224    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 06:00:12.425241    4495 buildroot.go:70] root file system type: tmpfs
	I0816 06:00:12.425311    4495 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 06:00:12.425325    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.425467    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.425560    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.425665    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.425755    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.425903    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.426047    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.426095    4495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 06:00:12.493904    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 06:00:12.493926    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:12.494059    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:12.494132    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.494215    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:12.494295    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:12.494395    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:12.494544    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:12.494556    4495 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 06:00:14.152169    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 06:00:14.152193    4495 machine.go:96] duration metric: took 13.138876329s to provisionDockerMachine
	I0816 06:00:14.152206    4495 start.go:293] postStartSetup for "multinode-120000" (driver="hyperkit")
	I0816 06:00:14.152214    4495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 06:00:14.152227    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.152413    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 06:00:14.152425    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.152521    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.152608    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.152712    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.152802    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.198445    4495 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 06:00:14.202279    4495 command_runner.go:130] > NAME=Buildroot
	I0816 06:00:14.202290    4495 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 06:00:14.202296    4495 command_runner.go:130] > ID=buildroot
	I0816 06:00:14.202301    4495 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 06:00:14.202308    4495 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 06:00:14.202366    4495 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 06:00:14.202381    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 06:00:14.202494    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 06:00:14.202686    4495 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 06:00:14.202692    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 06:00:14.202900    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 06:00:14.213199    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:00:14.244608    4495 start.go:296] duration metric: took 92.394783ms for postStartSetup
	I0816 06:00:14.244634    4495 fix.go:56] duration metric: took 13.401033545s for fixHost
	I0816 06:00:14.244647    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.244776    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.244886    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.244969    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.245059    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.245183    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:14.245322    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0816 06:00:14.245329    4495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 06:00:14.304249    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813214.442767633
	
	I0816 06:00:14.304262    4495 fix.go:216] guest clock: 1723813214.442767633
	I0816 06:00:14.304268    4495 fix.go:229] Guest: 2024-08-16 06:00:14.442767633 -0700 PDT Remote: 2024-08-16 06:00:14.244637 -0700 PDT m=+13.842835509 (delta=198.130633ms)
	I0816 06:00:14.304285    4495 fix.go:200] guest clock delta is within tolerance: 198.130633ms
	I0816 06:00:14.304290    4495 start.go:83] releasing machines lock for "multinode-120000", held for 13.460725269s
	I0816 06:00:14.304308    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.304440    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:14.304545    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.304904    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.305007    4495 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 06:00:14.305080    4495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 06:00:14.305120    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.305151    4495 ssh_runner.go:195] Run: cat /version.json
	I0816 06:00:14.305162    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 06:00:14.305236    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.305272    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 06:00:14.305347    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.305368    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 06:00:14.305450    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.305467    4495 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 06:00:14.305538    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.305568    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 06:00:14.338164    4495 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0816 06:00:14.338478    4495 ssh_runner.go:195] Run: systemctl --version
	I0816 06:00:14.380328    4495 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 06:00:14.381111    4495 command_runner.go:130] > systemd 252 (252)
	I0816 06:00:14.381147    4495 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0816 06:00:14.381279    4495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 06:00:14.386544    4495 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 06:00:14.386566    4495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 06:00:14.386605    4495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 06:00:14.399180    4495 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0816 06:00:14.399210    4495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 06:00:14.399216    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:00:14.399316    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:00:14.414316    4495 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0816 06:00:14.414565    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 06:00:14.423600    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 06:00:14.432546    4495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 06:00:14.432587    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 06:00:14.441434    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:00:14.450288    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 06:00:14.458973    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:00:14.467883    4495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 06:00:14.476947    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 06:00:14.485850    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 06:00:14.494622    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 06:00:14.503560    4495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 06:00:14.511413    4495 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 06:00:14.511568    4495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 06:00:14.519551    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:14.617061    4495 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 06:00:14.636046    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:00:14.636127    4495 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 06:00:14.649044    4495 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0816 06:00:14.649494    4495 command_runner.go:130] > [Unit]
	I0816 06:00:14.649504    4495 command_runner.go:130] > Description=Docker Application Container Engine
	I0816 06:00:14.649508    4495 command_runner.go:130] > Documentation=https://docs.docker.com
	I0816 06:00:14.649516    4495 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0816 06:00:14.649523    4495 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0816 06:00:14.649535    4495 command_runner.go:130] > StartLimitBurst=3
	I0816 06:00:14.649542    4495 command_runner.go:130] > StartLimitIntervalSec=60
	I0816 06:00:14.649546    4495 command_runner.go:130] > [Service]
	I0816 06:00:14.649550    4495 command_runner.go:130] > Type=notify
	I0816 06:00:14.649554    4495 command_runner.go:130] > Restart=on-failure
	I0816 06:00:14.649565    4495 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0816 06:00:14.649581    4495 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0816 06:00:14.649588    4495 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0816 06:00:14.649594    4495 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0816 06:00:14.649600    4495 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0816 06:00:14.649606    4495 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0816 06:00:14.649612    4495 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0816 06:00:14.649622    4495 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0816 06:00:14.649628    4495 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0816 06:00:14.649633    4495 command_runner.go:130] > ExecStart=
	I0816 06:00:14.649645    4495 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0816 06:00:14.649650    4495 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0816 06:00:14.649656    4495 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0816 06:00:14.649662    4495 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0816 06:00:14.649666    4495 command_runner.go:130] > LimitNOFILE=infinity
	I0816 06:00:14.649669    4495 command_runner.go:130] > LimitNPROC=infinity
	I0816 06:00:14.649672    4495 command_runner.go:130] > LimitCORE=infinity
	I0816 06:00:14.649695    4495 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0816 06:00:14.649703    4495 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0816 06:00:14.649712    4495 command_runner.go:130] > TasksMax=infinity
	I0816 06:00:14.649716    4495 command_runner.go:130] > TimeoutStartSec=0
	I0816 06:00:14.649727    4495 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0816 06:00:14.649731    4495 command_runner.go:130] > Delegate=yes
	I0816 06:00:14.649754    4495 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0816 06:00:14.649761    4495 command_runner.go:130] > KillMode=process
	I0816 06:00:14.649769    4495 command_runner.go:130] > [Install]
	I0816 06:00:14.649781    4495 command_runner.go:130] > WantedBy=multi-user.target
	I0816 06:00:14.649850    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:00:14.663052    4495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 06:00:14.677177    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:00:14.688371    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:00:14.699288    4495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 06:00:14.723884    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:00:14.735509    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:00:14.750493    4495 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0816 06:00:14.750755    4495 ssh_runner.go:195] Run: which cri-dockerd
	I0816 06:00:14.753530    4495 command_runner.go:130] > /usr/bin/cri-dockerd
	I0816 06:00:14.753632    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 06:00:14.761690    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 06:00:14.774974    4495 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 06:00:14.873309    4495 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 06:00:14.971003    4495 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 06:00:14.971078    4495 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 06:00:14.986615    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:15.084813    4495 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 06:00:17.427732    4495 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.342944904s)
	I0816 06:00:17.427791    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 06:00:17.438062    4495 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 06:00:17.450633    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 06:00:17.461045    4495 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 06:00:17.564144    4495 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 06:00:17.663550    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:17.760162    4495 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 06:00:17.774480    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 06:00:17.785611    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:17.890999    4495 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 06:00:17.946400    4495 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 06:00:17.946480    4495 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 06:00:17.950787    4495 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0816 06:00:17.950802    4495 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0816 06:00:17.950807    4495 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0816 06:00:17.950823    4495 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0816 06:00:17.950834    4495 command_runner.go:130] > Access: 2024-08-16 13:00:18.043272298 +0000
	I0816 06:00:17.950847    4495 command_runner.go:130] > Modify: 2024-08-16 13:00:18.043272298 +0000
	I0816 06:00:17.950852    4495 command_runner.go:130] > Change: 2024-08-16 13:00:18.044272176 +0000
	I0816 06:00:17.950856    4495 command_runner.go:130] >  Birth: -
	I0816 06:00:17.951036    4495 start.go:563] Will wait 60s for crictl version
	I0816 06:00:17.951085    4495 ssh_runner.go:195] Run: which crictl
	I0816 06:00:17.953851    4495 command_runner.go:130] > /usr/bin/crictl
	I0816 06:00:17.954463    4495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 06:00:17.979279    4495 command_runner.go:130] > Version:  0.1.0
	I0816 06:00:17.979292    4495 command_runner.go:130] > RuntimeName:  docker
	I0816 06:00:17.979296    4495 command_runner.go:130] > RuntimeVersion:  27.1.2
	I0816 06:00:17.979299    4495 command_runner.go:130] > RuntimeApiVersion:  v1
	I0816 06:00:17.980280    4495 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0816 06:00:17.980344    4495 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 06:00:17.995839    4495 command_runner.go:130] > 27.1.2
	I0816 06:00:17.996678    4495 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 06:00:18.012170    4495 command_runner.go:130] > 27.1.2
	I0816 06:00:18.033519    4495 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0816 06:00:18.033565    4495 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 06:00:18.033913    4495 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0816 06:00:18.038461    4495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 06:00:18.048016    4495 kubeadm.go:883] updating cluster {Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 06:00:18.048115    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:18.048172    4495 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 06:00:18.061303    4495 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0816 06:00:18.061317    4495 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 06:00:18.061322    4495 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0816 06:00:18.061336    4495 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0816 06:00:18.061346    4495 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0816 06:00:18.061349    4495 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0816 06:00:18.061353    4495 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0816 06:00:18.061357    4495 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0816 06:00:18.061366    4495 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 06:00:18.061370    4495 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0816 06:00:18.062136    4495 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 06:00:18.062144    4495 docker.go:615] Images already preloaded, skipping extraction
	I0816 06:00:18.062221    4495 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 06:00:18.074523    4495 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0816 06:00:18.074536    4495 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 06:00:18.074540    4495 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0816 06:00:18.074544    4495 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0816 06:00:18.074548    4495 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0816 06:00:18.074551    4495 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0816 06:00:18.074555    4495 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0816 06:00:18.074559    4495 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0816 06:00:18.074564    4495 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 06:00:18.074568    4495 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0816 06:00:18.075367    4495 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0816 06:00:18.075385    4495 cache_images.go:84] Images are preloaded, skipping loading
	I0816 06:00:18.075396    4495 kubeadm.go:934] updating node { 192.169.0.14 8443 v1.31.0 docker true true} ...
	I0816 06:00:18.075472    4495 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-120000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 06:00:18.075537    4495 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 06:00:18.113175    4495 command_runner.go:130] > cgroupfs
	I0816 06:00:18.113266    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:18.113275    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:18.113302    4495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 06:00:18.113319    4495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-120000 NodeName:multinode-120000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 06:00:18.113408    4495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-120000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 06:00:18.113469    4495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 06:00:18.121038    4495 command_runner.go:130] > kubeadm
	I0816 06:00:18.121048    4495 command_runner.go:130] > kubectl
	I0816 06:00:18.121052    4495 command_runner.go:130] > kubelet
	I0816 06:00:18.121093    4495 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 06:00:18.121136    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 06:00:18.128283    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0816 06:00:18.141829    4495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 06:00:18.155424    4495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0816 06:00:18.168960    4495 ssh_runner.go:195] Run: grep 192.169.0.14	control-plane.minikube.internal$ /etc/hosts
	I0816 06:00:18.171819    4495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 06:00:18.181142    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:18.274536    4495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 06:00:18.289156    4495 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000 for IP: 192.169.0.14
	I0816 06:00:18.289168    4495 certs.go:194] generating shared ca certs ...
	I0816 06:00:18.289180    4495 certs.go:226] acquiring lock for ca certs: {Name:mka8d379c8c727269d4fdbc63829b5acbfd7a90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:18.289367    4495 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key
	I0816 06:00:18.289439    4495 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key
	I0816 06:00:18.289449    4495 certs.go:256] generating profile certs ...
	I0816 06:00:18.289566    4495 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.key
	I0816 06:00:18.289649    4495 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key.70a1c6a2
	I0816 06:00:18.289720    4495 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key
	I0816 06:00:18.289727    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0816 06:00:18.289752    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0816 06:00:18.289771    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0816 06:00:18.289796    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0816 06:00:18.289821    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0816 06:00:18.289854    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0816 06:00:18.289882    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0816 06:00:18.289901    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0816 06:00:18.290009    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem (1338 bytes)
	W0816 06:00:18.290056    4495 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554_empty.pem, impossibly tiny 0 bytes
	I0816 06:00:18.290065    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 06:00:18.290111    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem (1082 bytes)
	I0816 06:00:18.290167    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem (1123 bytes)
	I0816 06:00:18.290209    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem (1679 bytes)
	I0816 06:00:18.290291    4495 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:00:18.290324    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem -> /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.290344    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.290368    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.290888    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 06:00:18.316173    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0816 06:00:18.340496    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 06:00:18.367130    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 06:00:18.393037    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0816 06:00:18.413355    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 06:00:18.433154    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 06:00:18.452867    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 06:00:18.472751    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/1554.pem --> /usr/share/ca-certificates/1554.pem (1338 bytes)
	I0816 06:00:18.492599    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /usr/share/ca-certificates/15542.pem (1708 bytes)
	I0816 06:00:18.512435    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 06:00:18.531920    4495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 06:00:18.545336    4495 ssh_runner.go:195] Run: openssl version
	I0816 06:00:18.549529    4495 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0816 06:00:18.549586    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1554.pem && ln -fs /usr/share/ca-certificates/1554.pem /etc/ssl/certs/1554.pem"
	I0816 06:00:18.557821    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561172    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561269    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 12:29 /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.561303    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1554.pem
	I0816 06:00:18.565440    4495 command_runner.go:130] > 51391683
	I0816 06:00:18.565487    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1554.pem /etc/ssl/certs/51391683.0"
	I0816 06:00:18.573717    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15542.pem && ln -fs /usr/share/ca-certificates/15542.pem /etc/ssl/certs/15542.pem"
	I0816 06:00:18.582023    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585446    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585524    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 12:29 /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.585558    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15542.pem
	I0816 06:00:18.589588    4495 command_runner.go:130] > 3ec20f2e
	I0816 06:00:18.589734    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 06:00:18.597966    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 06:00:18.606288    4495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609721    4495 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609776    4495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 12:20 /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.609813    4495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 06:00:18.613962    4495 command_runner.go:130] > b5213941
	I0816 06:00:18.614025    4495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 06:00:18.622223    4495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 06:00:18.625651    4495 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 06:00:18.625661    4495 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0816 06:00:18.625666    4495 command_runner.go:130] > Device: 253,1	Inode: 5242679     Links: 1
	I0816 06:00:18.625671    4495 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 06:00:18.625677    4495 command_runner.go:130] > Access: 2024-08-16 12:57:57.786133727 +0000
	I0816 06:00:18.625682    4495 command_runner.go:130] > Modify: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625687    4495 command_runner.go:130] > Change: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625691    4495 command_runner.go:130] >  Birth: 2024-08-16 12:54:19.807483068 +0000
	I0816 06:00:18.625754    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 06:00:18.630112    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.630187    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 06:00:18.634437    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.634494    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 06:00:18.638801    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.638908    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 06:00:18.643107    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.643222    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 06:00:18.647403    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.647443    4495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0816 06:00:18.651643    4495 command_runner.go:130] > Certificate will not expire
	I0816 06:00:18.651701    4495 kubeadm.go:392] StartCluster: {Name:multinode-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.15 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:00:18.651803    4495 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 06:00:18.664662    4495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 06:00:18.672135    4495 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0816 06:00:18.672152    4495 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0816 06:00:18.672158    4495 command_runner.go:130] > /var/lib/minikube/etcd:
	I0816 06:00:18.672162    4495 command_runner.go:130] > member
	I0816 06:00:18.672188    4495 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 06:00:18.672201    4495 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 06:00:18.672247    4495 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 06:00:18.680637    4495 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 06:00:18.680951    4495 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-120000" does not appear in /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:18.681030    4495 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1009/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-120000" cluster setting kubeconfig missing "multinode-120000" context setting]
	I0816 06:00:18.681214    4495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:18.681863    4495 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:18.682072    4495 kapi.go:59] client config for multinode-120000: &rest.Config{Host:"https://192.169.0.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1009/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc6caf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 06:00:18.682394    4495 cert_rotation.go:140] Starting client certificate rotation controller
	I0816 06:00:18.682572    4495 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 06:00:18.689783    4495 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.14
	I0816 06:00:18.689803    4495 kubeadm.go:1160] stopping kube-system containers ...
	I0816 06:00:18.689858    4495 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 06:00:18.704984    4495 command_runner.go:130] > f09b2d4d9690
	I0816 06:00:18.705001    4495 command_runner.go:130] > 856dd8770ce9
	I0816 06:00:18.705005    4495 command_runner.go:130] > 24fec6612d93
	I0816 06:00:18.705009    4495 command_runner.go:130] > 5ae7eceff676
	I0816 06:00:18.705013    4495 command_runner.go:130] > 422de4039b19
	I0816 06:00:18.705034    4495 command_runner.go:130] > 701ae173eac2
	I0816 06:00:18.705041    4495 command_runner.go:130] > 7fb2b2ed4016
	I0816 06:00:18.705045    4495 command_runner.go:130] > 796b051433aa
	I0816 06:00:18.705048    4495 command_runner.go:130] > 5901c509532d
	I0816 06:00:18.705053    4495 command_runner.go:130] > 26d48b6ad6fb
	I0816 06:00:18.705057    4495 command_runner.go:130] > 157135701f7d
	I0816 06:00:18.705061    4495 command_runner.go:130] > a5500cc4ab0e
	I0816 06:00:18.705064    4495 command_runner.go:130] > a92131c1b00a
	I0816 06:00:18.705067    4495 command_runner.go:130] > cbed74cdc18e
	I0816 06:00:18.705074    4495 command_runner.go:130] > df82653f7f9d
	I0816 06:00:18.705077    4495 command_runner.go:130] > c6d3cc10ad7c
	I0816 06:00:18.705080    4495 command_runner.go:130] > 01366dfa40b1
	I0816 06:00:18.705084    4495 command_runner.go:130] > 11af48a0790c
	I0816 06:00:18.705087    4495 command_runner.go:130] > 971f82e6187b
	I0816 06:00:18.705092    4495 command_runner.go:130] > cbb55d45a02c
	I0816 06:00:18.705096    4495 command_runner.go:130] > 10f645568130
	I0816 06:00:18.705099    4495 command_runner.go:130] > deee90d52a28
	I0816 06:00:18.705102    4495 command_runner.go:130] > d370d863b181
	I0816 06:00:18.705105    4495 command_runner.go:130] > f15fb0af4dd4
	I0816 06:00:18.705108    4495 command_runner.go:130] > a92ac57224e4
	I0816 06:00:18.705111    4495 command_runner.go:130] > 83daf80db5c2
	I0816 06:00:18.705114    4495 command_runner.go:130] > d6c5415334b1
	I0816 06:00:18.705118    4495 command_runner.go:130] > c7a8ef69797c
	I0816 06:00:18.705121    4495 command_runner.go:130] > fb52eeaf600d
	I0816 06:00:18.705125    4495 command_runner.go:130] > 69c4ad6acb20
	I0816 06:00:18.705128    4495 command_runner.go:130] > 30668a85ecab
	I0816 06:00:18.705756    4495 docker.go:483] Stopping containers: [f09b2d4d9690 856dd8770ce9 24fec6612d93 5ae7eceff676 422de4039b19 701ae173eac2 7fb2b2ed4016 796b051433aa 5901c509532d 26d48b6ad6fb 157135701f7d a5500cc4ab0e a92131c1b00a cbed74cdc18e df82653f7f9d c6d3cc10ad7c 01366dfa40b1 11af48a0790c 971f82e6187b cbb55d45a02c 10f645568130 deee90d52a28 d370d863b181 f15fb0af4dd4 a92ac57224e4 83daf80db5c2 d6c5415334b1 c7a8ef69797c fb52eeaf600d 69c4ad6acb20 30668a85ecab]
	I0816 06:00:18.705835    4495 ssh_runner.go:195] Run: docker stop f09b2d4d9690 856dd8770ce9 24fec6612d93 5ae7eceff676 422de4039b19 701ae173eac2 7fb2b2ed4016 796b051433aa 5901c509532d 26d48b6ad6fb 157135701f7d a5500cc4ab0e a92131c1b00a cbed74cdc18e df82653f7f9d c6d3cc10ad7c 01366dfa40b1 11af48a0790c 971f82e6187b cbb55d45a02c 10f645568130 deee90d52a28 d370d863b181 f15fb0af4dd4 a92ac57224e4 83daf80db5c2 d6c5415334b1 c7a8ef69797c fb52eeaf600d 69c4ad6acb20 30668a85ecab
	I0816 06:00:18.720707    4495 command_runner.go:130] > f09b2d4d9690
	I0816 06:00:18.720720    4495 command_runner.go:130] > 856dd8770ce9
	I0816 06:00:18.720724    4495 command_runner.go:130] > 24fec6612d93
	I0816 06:00:18.721258    4495 command_runner.go:130] > 5ae7eceff676
	I0816 06:00:18.721337    4495 command_runner.go:130] > 422de4039b19
	I0816 06:00:18.722561    4495 command_runner.go:130] > 701ae173eac2
	I0816 06:00:18.722569    4495 command_runner.go:130] > 7fb2b2ed4016
	I0816 06:00:18.722572    4495 command_runner.go:130] > 796b051433aa
	I0816 06:00:18.722576    4495 command_runner.go:130] > 5901c509532d
	I0816 06:00:18.722579    4495 command_runner.go:130] > 26d48b6ad6fb
	I0816 06:00:18.722583    4495 command_runner.go:130] > 157135701f7d
	I0816 06:00:18.722586    4495 command_runner.go:130] > a5500cc4ab0e
	I0816 06:00:18.722590    4495 command_runner.go:130] > a92131c1b00a
	I0816 06:00:18.722593    4495 command_runner.go:130] > cbed74cdc18e
	I0816 06:00:18.722597    4495 command_runner.go:130] > df82653f7f9d
	I0816 06:00:18.722600    4495 command_runner.go:130] > c6d3cc10ad7c
	I0816 06:00:18.722603    4495 command_runner.go:130] > 01366dfa40b1
	I0816 06:00:18.722607    4495 command_runner.go:130] > 11af48a0790c
	I0816 06:00:18.722610    4495 command_runner.go:130] > 971f82e6187b
	I0816 06:00:18.722623    4495 command_runner.go:130] > cbb55d45a02c
	I0816 06:00:18.722627    4495 command_runner.go:130] > 10f645568130
	I0816 06:00:18.722630    4495 command_runner.go:130] > deee90d52a28
	I0816 06:00:18.722633    4495 command_runner.go:130] > d370d863b181
	I0816 06:00:18.722637    4495 command_runner.go:130] > f15fb0af4dd4
	I0816 06:00:18.722652    4495 command_runner.go:130] > a92ac57224e4
	I0816 06:00:18.722659    4495 command_runner.go:130] > 83daf80db5c2
	I0816 06:00:18.722662    4495 command_runner.go:130] > d6c5415334b1
	I0816 06:00:18.722666    4495 command_runner.go:130] > c7a8ef69797c
	I0816 06:00:18.722670    4495 command_runner.go:130] > fb52eeaf600d
	I0816 06:00:18.722677    4495 command_runner.go:130] > 69c4ad6acb20
	I0816 06:00:18.722681    4495 command_runner.go:130] > 30668a85ecab
	I0816 06:00:18.723245    4495 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 06:00:18.735477    4495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 06:00:18.742861    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0816 06:00:18.742883    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0816 06:00:18.742890    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0816 06:00:18.742897    4495 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 06:00:18.742921    4495 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 06:00:18.742926    4495 kubeadm.go:157] found existing configuration files:
	
	I0816 06:00:18.742969    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 06:00:18.749869    4495 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 06:00:18.749886    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 06:00:18.749919    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 06:00:18.757017    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 06:00:18.764154    4495 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 06:00:18.764171    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 06:00:18.764207    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 06:00:18.771390    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 06:00:18.778374    4495 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 06:00:18.778486    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 06:00:18.778526    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 06:00:18.785736    4495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 06:00:18.792788    4495 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 06:00:18.792810    4495 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 06:00:18.792853    4495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
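	The four grep-then-rm cycles above (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) follow one pattern: keep a kubeconfig only if it already references the expected control-plane endpoint, otherwise force-remove it. A minimal sketch of that logic (a hypothetical reimplementation for illustration, not minikube's actual Go code; `cleanup_stale_configs` is an invented name):

	```python
	import os

	CONTROL_PLANE = "https://control-plane.minikube.internal:8443"
	CONF_FILES = ["admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"]

	def cleanup_stale_configs(kube_dir):
	    """Remove kubeconfig files that do not reference the expected
	    control-plane endpoint (mirrors the grep-then-rm cycle in the log)."""
	    removed = []
	    for name in CONF_FILES:
	        path = os.path.join(kube_dir, name)
	        try:
	            with open(path) as f:
	                if CONTROL_PLANE in f.read():
	                    continue  # endpoint present: keep the file
	        except FileNotFoundError:
	            pass  # missing file: grep exits with status 2, nothing to keep
	        # stale or absent: make sure it is gone, like `sudo rm -f`
	        if os.path.exists(path):
	            os.remove(path)
	        removed.append(name)
	    return removed
	```

	In the run above every grep exits with status 2 because none of the four files exist, so each `rm -f` is a no-op and kubeadm regenerates all of them in the `kubeconfig all` phase that follows.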
	I0816 06:00:18.800180    4495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 06:00:18.807474    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:18.876944    4495 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 06:00:18.877095    4495 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0816 06:00:18.877286    4495 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0816 06:00:18.877428    4495 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 06:00:18.877660    4495 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0816 06:00:18.877817    4495 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0816 06:00:18.878113    4495 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0816 06:00:18.878292    4495 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0816 06:00:18.878442    4495 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0816 06:00:18.878558    4495 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 06:00:18.878682    4495 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 06:00:18.878862    4495 command_runner.go:130] > [certs] Using the existing "sa" key
	I0816 06:00:18.879716    4495 command_runner.go:130] ! W0816 13:00:19.014566    1405 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:18.879730    4495 command_runner.go:130] ! W0816 13:00:19.015145    1405 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:18.879822    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:18.913639    4495 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 06:00:19.126123    4495 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 06:00:19.196694    4495 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 06:00:19.271744    4495 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 06:00:19.471211    4495 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 06:00:19.536733    4495 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 06:00:19.538626    4495 command_runner.go:130] ! W0816 13:00:19.052252    1410 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.538650    4495 command_runner.go:130] ! W0816 13:00:19.052914    1410 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.538687    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.588124    4495 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 06:00:19.592966    4495 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 06:00:19.592976    4495 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0816 06:00:19.696720    4495 command_runner.go:130] ! W0816 13:00:19.714655    1414 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.696744    4495 command_runner.go:130] ! W0816 13:00:19.715264    1414 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.696756    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.752739    4495 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 06:00:19.752753    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 06:00:19.754873    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 06:00:19.755884    4495 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 06:00:19.758437    4495 command_runner.go:130] ! W0816 13:00:19.893078    1441 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.758458    4495 command_runner.go:130] ! W0816 13:00:19.893553    1441 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.758532    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:19.871944    4495 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 06:00:19.877346    4495 command_runner.go:130] ! W0816 13:00:20.007621    1449 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.877370    4495 command_runner.go:130] ! W0816 13:00:20.008144    1449 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:19.877395    4495 api_server.go:52] waiting for apiserver process to appear ...
	I0816 06:00:19.877446    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:20.379632    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:20.878610    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.377548    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.878597    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:21.890976    4495 command_runner.go:130] > 1735
	I0816 06:00:21.891116    4495 api_server.go:72] duration metric: took 2.013765981s to wait for apiserver process to appear ...
	I0816 06:00:21.891126    4495 api_server.go:88] waiting for apiserver healthz status ...
	I0816 06:00:21.891144    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.629177    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 06:00:23.629204    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 06:00:23.629213    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.650033    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 06:00:23.650048    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 06:00:23.891437    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:23.903552    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 06:00:23.903567    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 06:00:24.391754    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:24.394976    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0816 06:00:24.394988    4495 api_server.go:103] status: https://192.169.0.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0816 06:00:24.891376    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:24.897310    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
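	The healthz sequence above (403 while RBAC bootstraps, 500 while post-start hooks finish, then 200 "ok") is a plain poll-until-healthy loop. A sketch under stated assumptions (`wait_for_healthz` and the `fetch` callable are invented for illustration; minikube's real implementation is the Go code in `api_server.go`):

	```python
	import time

	def wait_for_healthz(fetch, timeout=30.0, interval=0.5):
	    """Poll an apiserver /healthz endpoint until it returns (200, "ok").

	    `fetch` is any callable returning (status_code, body). Non-200
	    responses -- the 403 "system:anonymous" and 500 "healthz check
	    failed" replies seen in the log -- are treated as not-ready and
	    retried until the deadline."""
	    deadline = time.monotonic() + timeout
	    while time.monotonic() < deadline:
	        try:
	            status, body = fetch()
	            if status == 200:
	                return body
	        except OSError:
	            pass  # connection refused while the apiserver restarts
	        time.sleep(interval)
	    raise TimeoutError("apiserver /healthz never returned 200")
	```

	The 403 replies are expected early on: the anonymous probe is rejected until the `rbac/bootstrap-roles` post-start hook (still `[-]` in the 500 bodies above) has installed the bootstrap RBAC rules.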
	I0816 06:00:24.897388    4495 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0816 06:00:24.897396    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:24.897405    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:24.897408    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:24.902527    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:24.902536    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:24.902541    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:24.902545    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:24.902549    4495 round_trippers.go:580]     Content-Length: 263
	I0816 06:00:24.902552    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:24.902554    4495 round_trippers.go:580]     Audit-Id: b0213136-2235-4ab5-968c-cd4581e041f0
	I0816 06:00:24.902556    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:24.902558    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:24.902579    4495 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
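	The control-plane version logged next is read straight out of the `/version` response body shown above. Decoding it is a one-line JSON parse (illustrative only; the field names come from the response in the log):

	```python
	import json

	# Response body from GET https://<apiserver>:8443/version, as logged above.
	version_body = """{
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}"""

	info = json.loads(version_body)
	control_plane_version = info["gitVersion"]  # "v1.31.0", matching api_server.go:141
	```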
	I0816 06:00:24.902624    4495 api_server.go:141] control plane version: v1.31.0
	I0816 06:00:24.902634    4495 api_server.go:131] duration metric: took 3.011562911s to wait for apiserver health ...
	I0816 06:00:24.902639    4495 cni.go:84] Creating CNI manager for ""
	I0816 06:00:24.902643    4495 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0816 06:00:24.940103    4495 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 06:00:24.960494    4495 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 06:00:24.965300    4495 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0816 06:00:24.965324    4495 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0816 06:00:24.965331    4495 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0816 06:00:24.965336    4495 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0816 06:00:24.965340    4495 command_runner.go:130] > Access: 2024-08-16 13:00:11.117693160 +0000
	I0816 06:00:24.965345    4495 command_runner.go:130] > Modify: 2024-08-14 20:00:07.000000000 +0000
	I0816 06:00:24.965350    4495 command_runner.go:130] > Change: 2024-08-16 13:00:08.930127539 +0000
	I0816 06:00:24.965353    4495 command_runner.go:130] >  Birth: -
	I0816 06:00:24.965491    4495 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 06:00:24.965500    4495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 06:00:24.980343    4495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 06:00:25.417057    4495 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0816 06:00:25.417072    4495 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0816 06:00:25.417077    4495 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0816 06:00:25.417081    4495 command_runner.go:130] > daemonset.apps/kindnet configured
	I0816 06:00:25.417181    4495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 06:00:25.417223    4495 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 06:00:25.417233    4495 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 06:00:25.417281    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:25.417286    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.417292    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.417296    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.420085    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.420093    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.420099    4495 round_trippers.go:580]     Audit-Id: 23577c90-0bb1-4f7c-9a81-9fee35307577
	I0816 06:00:25.420102    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.420105    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.420108    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.420110    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.420112    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.421556    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1226"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90806 chars]
	I0816 06:00:25.424642    4495 system_pods.go:59] 12 kube-system pods found
	I0816 06:00:25.424658    4495 system_pods.go:61] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 06:00:25.424664    4495 system_pods.go:61] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 06:00:25.424669    4495 system_pods.go:61] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:25.424673    4495 system_pods.go:61] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:25.424678    4495 system_pods.go:61] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0816 06:00:25.424685    4495 system_pods.go:61] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 06:00:25.424690    4495 system_pods.go:61] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 06:00:25.424697    4495 system_pods.go:61] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 06:00:25.424701    4495 system_pods.go:61] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:25.424704    4495 system_pods.go:61] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:25.424708    4495 system_pods.go:61] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 06:00:25.424712    4495 system_pods.go:61] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:25.424716    4495 system_pods.go:74] duration metric: took 7.527876ms to wait for pod list to return data ...
	I0816 06:00:25.424721    4495 node_conditions.go:102] verifying NodePressure condition ...
	I0816 06:00:25.424758    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:25.424762    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.424768    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.424770    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.426447    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.426456    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.426461    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.426465    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.426469    4495 round_trippers.go:580]     Audit-Id: f8dd1ea7-5255-4e2a-b32e-e6d9ddd07538
	I0816 06:00:25.426472    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.426475    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.426477    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.426636    4495 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1226"},"items":[{"metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10144 chars]
	I0816 06:00:25.427052    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:25.427064    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:25.427072    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:25.427076    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:25.427080    4495 node_conditions.go:105] duration metric: took 2.355042ms to run NodePressure ...
	I0816 06:00:25.427089    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 06:00:25.573807    4495 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0816 06:00:25.723217    4495 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0816 06:00:25.724311    4495 command_runner.go:130] ! W0816 13:00:25.658424    2250 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:25.724332    4495 command_runner.go:130] ! W0816 13:00:25.658861    2250 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 06:00:25.724346    4495 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0816 06:00:25.724405    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0816 06:00:25.724411    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.724416    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.724421    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.726729    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.726739    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.726744    4495 round_trippers.go:580]     Audit-Id: cc78fe2a-eab3-44ca-8902-147009b93ca2
	I0816 06:00:25.726749    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.726752    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.726755    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.726758    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.726760    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.727321    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1232"},"items":[{"metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1162","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 31223 chars]
	I0816 06:00:25.728024    4495 kubeadm.go:739] kubelet initialised
	I0816 06:00:25.728034    4495 kubeadm.go:740] duration metric: took 3.679784ms waiting for restarted kubelet to initialise ...
	I0816 06:00:25.728041    4495 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:25.728071    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:25.728076    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.728081    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.728086    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.729698    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.729705    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.729710    4495 round_trippers.go:580]     Audit-Id: 05ad863b-28d8-480c-9de8-26a2f4258f47
	I0816 06:00:25.729716    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.729720    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.729724    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.729729    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.729733    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.730435    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1232"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89949 chars]
	I0816 06:00:25.732286    4495 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.732322    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:25.732327    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.732333    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.732336    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.733399    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.733408    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.733415    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.733425    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.733430    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.733435    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.733440    4495 round_trippers.go:580]     Audit-Id: 01517cdb-0de0-4cb1-aea3-3efd79fb52fc
	I0816 06:00:25.733442    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.733553    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:25.733792    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.733799    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.733804    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.733808    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.734835    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:25.734842    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.734848    4495 round_trippers.go:580]     Audit-Id: 9973ba49-60dc-4b43-9ca8-b6c9059303eb
	I0816 06:00:25.734851    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.734855    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.734859    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.734864    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.734869    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.734977    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.735143    4495 pod_ready.go:98] node "multinode-120000" hosting pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.735152    4495 pod_ready.go:82] duration metric: took 2.857102ms for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.735157    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.735163    4495 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.735190    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-120000
	I0816 06:00:25.735195    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.735200    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.735203    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.736148    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.736155    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.736160    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.736165    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.736173    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.736176    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.736179    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.736181    4495 round_trippers.go:580]     Audit-Id: cf8ef4ef-e416-4edc-8b8f-7b1951b093c3
	I0816 06:00:25.736290    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1162","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6888 chars]
	I0816 06:00:25.736506    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.736513    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.736519    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.736522    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.737386    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.737394    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.737399    4495 round_trippers.go:580]     Audit-Id: 97b12066-edce-4362-b4b0-406a5c2db88f
	I0816 06:00:25.737423    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.737427    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.737430    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.737434    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.737437    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.737512    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.737673    4495 pod_ready.go:98] node "multinode-120000" hosting pod "etcd-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.737681    4495 pod_ready.go:82] duration metric: took 2.51426ms for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.737687    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "etcd-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.737697    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.737722    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-120000
	I0816 06:00:25.737727    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.737732    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.737735    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.738701    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.738709    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.738714    4495 round_trippers.go:580]     Audit-Id: b2d355e4-df38-4ab8-9d62-c1a6bb40d6ff
	I0816 06:00:25.738719    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.738723    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.738728    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.738732    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.738737    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.738849    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-120000","namespace":"kube-system","uid":"6811daff-acfb-4752-939b-3d084a8a4c9a","resourceVersion":"1180","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.mirror":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479305Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0816 06:00:25.739067    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.739074    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.739079    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.739083    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.740009    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.740016    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.740021    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.740025    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.740029    4495 round_trippers.go:580]     Audit-Id: 24c29403-1f7b-48ae-93a5-9697e6ec2d8e
	I0816 06:00:25.740033    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.740036    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.740048    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.740148    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.740312    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-apiserver-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.740321    4495 pod_ready.go:82] duration metric: took 2.619144ms for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.740326    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-apiserver-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.740331    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:25.740358    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-120000
	I0816 06:00:25.740363    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.740368    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.740372    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.741298    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:25.741305    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.741309    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.741314    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.741318    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.741321    4495 round_trippers.go:580]     Audit-Id: 0f7db6e3-eed2-4c72-a02c-3eeaf9a32775
	I0816 06:00:25.741325    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.741328    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.741443    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-120000","namespace":"kube-system","uid":"67f0047c-62f5-4c90-bee3-40dc18cb33e6","resourceVersion":"1164","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.mirror":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0816 06:00:25.817742    4495 request.go:632] Waited for 76.054567ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.817838    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:25.817851    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:25.817863    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:25.817874    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:25.820036    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:25.820049    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:25.820056    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:25 GMT
	I0816 06:00:25.820061    4495 round_trippers.go:580]     Audit-Id: 53261c96-093b-47b4-8b80-8acc12f82fc6
	I0816 06:00:25.820066    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:25.820070    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:25.820075    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:25.820079    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:25.820399    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:25.820671    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-controller-manager-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.820685    4495 pod_ready.go:82] duration metric: took 80.349079ms for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:25.820693    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-controller-manager-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:25.820701    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.017795    4495 request.go:632] Waited for 197.051168ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:26.017943    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:26.017961    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.017977    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.017985    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.020436    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.020452    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.020460    4495 round_trippers.go:580]     Audit-Id: e99291a1-6610-4a48-8c1e-071af19761ec
	I0816 06:00:26.020465    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.020469    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.020473    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.020477    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.020481    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.020590    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msbdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dff96db-7737-4e41-a130-a356e3acfd78","resourceVersion":"1229","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0816 06:00:26.218687    4495 request.go:632] Waited for 197.736243ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:26.218777    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:26.218788    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.218800    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.218806    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.221733    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.221746    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.221753    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.221757    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.221798    4495 round_trippers.go:580]     Audit-Id: 948120ca-1b7c-4af3-86bb-5928f644b442
	I0816 06:00:26.221834    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.221841    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.221844    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.222464    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1145","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5293 chars]
	I0816 06:00:26.222688    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-proxy-msbdc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:26.222699    4495 pod_ready.go:82] duration metric: took 401.999873ms for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:26.222706    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-proxy-msbdc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:26.222711    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.417715    4495 request.go:632] Waited for 194.965416ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:26.417821    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:26.417833    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.417844    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.417850    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.419933    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.419950    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.419958    4495 round_trippers.go:580]     Audit-Id: a27eaab3-1fba-4261-86c8-4e82bd692723
	I0816 06:00:26.419963    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.419968    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.419973    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.419979    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.419985    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.420386    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vskxm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9b8ca3d-b5bd-4c44-8579-8b31879629ad","resourceVersion":"1104","creationTimestamp":"2024-08-16T12:56:05Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:56:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:26.618403    4495 request.go:632] Waited for 197.652521ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:26.618498    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:26.618509    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.618520    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.618529    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.621076    4495 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 06:00:26.621098    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.621108    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.621115    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.621122    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.621131    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.621136    4495 round_trippers.go:580]     Content-Length: 210
	I0816 06:00:26.621144    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.621150    4495 round_trippers.go:580]     Audit-Id: 84189acc-381e-445f-ab2d-83d8d34625a0
	I0816 06:00:26.621167    4495 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-120000-m03\" not found","reason":"NotFound","details":{"name":"multinode-120000-m03","kind":"nodes"},"code":404}
	I0816 06:00:26.621365    4495 pod_ready.go:98] node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:26.621383    4495 pod_ready.go:82] duration metric: took 398.673493ms for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:26.621394    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:26.621403    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:26.817948    4495 request.go:632] Waited for 196.480552ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:26.818086    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:26.818097    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:26.818109    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:26.818118    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:26.821076    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:26.821091    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:26.821099    4495 round_trippers.go:580]     Audit-Id: 04ca7c33-61a5-4c68-8785-276e1a624d18
	I0816 06:00:26.821103    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:26.821108    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:26.821113    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:26.821116    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:26.821119    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:26 GMT
	I0816 06:00:26.821219    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x88cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"21efba47-35db-47ba-ace5-119b04bf7355","resourceVersion":"1001","creationTimestamp":"2024-08-16T12:55:15Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:55:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:27.017388    4495 request.go:632] Waited for 195.844198ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:27.017448    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:27.017453    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.017460    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.017463    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.023306    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:27.023320    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.023326    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.023329    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.023331    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.023334    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.023337    4495 round_trippers.go:580]     Audit-Id: d3066e80-b4e4-42de-b6dd-67c5c1ca1bb5
	I0816 06:00:27.023340    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.023391    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000-m02","uid":"57b3de1e-d3de-4534-9ecc-a0706c682584","resourceVersion":"1019","creationTimestamp":"2024-08-16T12:58:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_16T05_58_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:58:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3805 chars]
	I0816 06:00:27.023559    4495 pod_ready.go:93] pod "kube-proxy-x88cp" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:27.023568    4495 pod_ready.go:82] duration metric: took 402.164034ms for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:27.023575    4495 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:27.217753    4495 request.go:632] Waited for 194.138319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:27.217911    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:27.217921    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.217932    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.217939    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.220647    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:27.220661    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.220668    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.220674    4495 round_trippers.go:580]     Audit-Id: 91c9d444-e56e-411b-b37d-b5ea87e6b50a
	I0816 06:00:27.220678    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.220682    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.220686    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.220690    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.221047    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-120000","namespace":"kube-system","uid":"b8188bb8-5278-422d-86a5-19d70c796638","resourceVersion":"1182","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.mirror":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.seen":"2024-08-16T12:54:27.908480653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0816 06:00:27.418903    4495 request.go:632] Waited for 197.499014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.418992    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.419003    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.419015    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.419025    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.422042    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:27.422058    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.422065    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.422070    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.422073    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.422078    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.422082    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.422087    4495 round_trippers.go:580]     Audit-Id: 32177227-9b65-4b4b-a32b-46d9331444aa
	I0816 06:00:27.422247    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:27.422512    4495 pod_ready.go:98] node "multinode-120000" hosting pod "kube-scheduler-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:27.422526    4495 pod_ready.go:82] duration metric: took 398.952661ms for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:27.422535    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000" hosting pod "kube-scheduler-multinode-120000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-120000" has status "Ready":"False"
	I0816 06:00:27.422541    4495 pod_ready.go:39] duration metric: took 1.694527427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:27.422554    4495 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 06:00:27.433482    4495 command_runner.go:130] > -16
	I0816 06:00:27.433765    4495 ops.go:34] apiserver oom_adj: -16
	I0816 06:00:27.433777    4495 kubeadm.go:597] duration metric: took 8.761739587s to restartPrimaryControlPlane
	I0816 06:00:27.433782    4495 kubeadm.go:394] duration metric: took 8.782260965s to StartCluster
	I0816 06:00:27.433791    4495 settings.go:142] acquiring lock: {Name:mkb3c8aac25c21025142737c3a236d96f65e9fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:27.433883    4495 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:00:27.434235    4495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/kubeconfig: {Name:mk6915a0ba589d1dc80279bf4163d9ba725a7b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:00:27.434492    4495 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:00:27.434547    4495 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 06:00:27.434663    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:27.457036    4495 out.go:177] * Verifying Kubernetes components...
	I0816 06:00:27.497964    4495 out.go:177] * Enabled addons: 
	I0816 06:00:27.519104    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:00:27.555967    4495 addons.go:510] duration metric: took 121.425372ms for enable addons: enabled=[]
	I0816 06:00:27.670896    4495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 06:00:27.683537    4495 node_ready.go:35] waiting up to 6m0s for node "multinode-120000" to be "Ready" ...
	I0816 06:00:27.683593    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:27.683599    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:27.683605    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:27.683610    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:27.685158    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:27.685166    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:27.685171    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:27.685174    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:27 GMT
	I0816 06:00:27.685178    4495 round_trippers.go:580]     Audit-Id: f0c9398f-4e83-4228-96b6-078b55620838
	I0816 06:00:27.685184    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:27.685188    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:27.685203    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:27.685587    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:28.183809    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:28.183891    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:28.183907    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:28.183914    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:28.186432    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:28.186448    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:28.186454    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:28.186459    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:28.186463    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:28.186467    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:28 GMT
	I0816 06:00:28.186470    4495 round_trippers.go:580]     Audit-Id: 6df01f5a-6454-4835-8410-d6deec65b5ee
	I0816 06:00:28.186473    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:28.186592    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:28.684413    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:28.684439    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:28.684450    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:28.684456    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:28.687367    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:28.687382    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:28.687388    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:28.687392    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:28.687395    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:28.687411    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:28.687418    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:28 GMT
	I0816 06:00:28.687421    4495 round_trippers.go:580]     Audit-Id: f5fbda18-63c7-4d36-a04a-469856ac6643
	I0816 06:00:28.687486    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.184833    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:29.184857    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:29.184868    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:29.184876    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:29.187720    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:29.187736    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:29.187743    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:29.187748    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:29 GMT
	I0816 06:00:29.187752    4495 round_trippers.go:580]     Audit-Id: 081e0a8c-5076-4160-9bf5-e86ebbed3097
	I0816 06:00:29.187758    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:29.187764    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:29.187770    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:29.188009    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.685089    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:29.685112    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:29.685124    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:29.685130    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:29.687629    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:29.687644    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:29.687654    4495 round_trippers.go:580]     Audit-Id: 234ea32c-891a-4ac1-a955-830b5b99ce3a
	I0816 06:00:29.687663    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:29.687668    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:29.687672    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:29.687676    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:29.687682    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:29 GMT
	I0816 06:00:29.687816    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:29.688079    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:30.184844    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:30.184865    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:30.184877    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:30.184884    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:30.187641    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:30.187660    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:30.187670    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:30.187678    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:30.187689    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:30 GMT
	I0816 06:00:30.187693    4495 round_trippers.go:580]     Audit-Id: c31de6db-811b-46b8-9aa4-623b561b164a
	I0816 06:00:30.187698    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:30.187703    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:30.188119    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:30.685502    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:30.685524    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:30.685536    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:30.685541    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:30.688085    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:30.688104    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:30.688112    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:30.688117    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:30 GMT
	I0816 06:00:30.688121    4495 round_trippers.go:580]     Audit-Id: 68e46f04-879a-40b2-8856-7face0d9c06e
	I0816 06:00:30.688126    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:30.688129    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:30.688133    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:30.688252    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:31.184738    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:31.184763    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:31.184774    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:31.184782    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:31.187254    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:31.187274    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:31.187284    4495 round_trippers.go:580]     Audit-Id: 57b4bd15-0c5c-4895-b2ea-51ec55b072eb
	I0816 06:00:31.187290    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:31.187295    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:31.187298    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:31.187302    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:31.187305    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:31 GMT
	I0816 06:00:31.187490    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:31.683735    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:31.683758    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:31.683770    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:31.683778    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:31.686293    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:31.686333    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:31.686346    4495 round_trippers.go:580]     Audit-Id: 9801362e-88df-4675-a2c8-0e91d5be5614
	I0816 06:00:31.686351    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:31.686354    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:31.686357    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:31.686362    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:31.686367    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:31 GMT
	I0816 06:00:31.686420    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:32.184604    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:32.184632    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:32.184644    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:32.184651    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:32.187173    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:32.187193    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:32.187201    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:32.187205    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:32.187210    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:32.187215    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:32.187220    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:32 GMT
	I0816 06:00:32.187224    4495 round_trippers.go:580]     Audit-Id: 1dbf9d45-c794-4697-a381-7fb61cb7609f
	I0816 06:00:32.187552    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:32.187830    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:32.684978    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:32.685004    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:32.685018    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:32.685026    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:32.687730    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:32.687753    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:32.687760    4495 round_trippers.go:580]     Audit-Id: c40c4e2b-e3de-4b0e-81b7-3a40fe487bfc
	I0816 06:00:32.687766    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:32.687771    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:32.687778    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:32.687781    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:32.687784    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:32 GMT
	I0816 06:00:32.688043    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:33.185794    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:33.185819    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:33.185831    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:33.185839    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:33.188963    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:33.188979    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:33.188986    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:33.188991    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:33.188995    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:33.189000    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:33 GMT
	I0816 06:00:33.189003    4495 round_trippers.go:580]     Audit-Id: 42192a3c-1770-4ab8-bdac-a9129416eb0a
	I0816 06:00:33.189007    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:33.189171    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:33.685058    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:33.685086    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:33.685098    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:33.685103    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:33.687708    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:33.687728    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:33.687736    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:33.687741    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:33 GMT
	I0816 06:00:33.687745    4495 round_trippers.go:580]     Audit-Id: 465b534b-271d-4b20-a0a5-d90509c1e5e7
	I0816 06:00:33.687748    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:33.687760    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:33.687764    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:33.687869    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.183737    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:34.183762    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:34.183774    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:34.183780    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:34.186354    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:34.186372    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:34.186379    4495 round_trippers.go:580]     Audit-Id: 40ff16d2-5ea9-46e9-b903-a0d03c7e14e0
	I0816 06:00:34.186383    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:34.186388    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:34.186393    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:34.186398    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:34.186402    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:34 GMT
	I0816 06:00:34.186756    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.684390    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:34.684414    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:34.684425    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:34.684433    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:34.686894    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:34.686911    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:34.686919    4495 round_trippers.go:580]     Audit-Id: ed86880d-2a2e-4592-abc2-f4e2c9222b99
	I0816 06:00:34.686925    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:34.686932    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:34.686937    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:34.686942    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:34.686947    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:34 GMT
	I0816 06:00:34.687229    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:34.687492    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:35.183952    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:35.183982    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:35.183994    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:35.184001    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:35.186228    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:35.186243    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:35.186251    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:35.186257    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:35 GMT
	I0816 06:00:35.186263    4495 round_trippers.go:580]     Audit-Id: bd5c6e0f-5321-4600-8834-753aad42096d
	I0816 06:00:35.186270    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:35.186276    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:35.186280    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:35.186477    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:35.683869    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:35.683896    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:35.683908    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:35.683917    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:35.686865    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:35.686907    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:35.686919    4495 round_trippers.go:580]     Audit-Id: c0a6029a-329b-4bd4-b351-0e3b6945e48f
	I0816 06:00:35.686925    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:35.686931    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:35.686934    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:35.686938    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:35.686942    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:35 GMT
	I0816 06:00:35.687074    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.185651    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:36.185673    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:36.185684    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:36.185690    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:36.188463    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:36.188478    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:36.188485    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:36 GMT
	I0816 06:00:36.188489    4495 round_trippers.go:580]     Audit-Id: 35a69241-cd12-49d9-bfe6-edf0ffb501d2
	I0816 06:00:36.188494    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:36.188498    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:36.188503    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:36.188506    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:36.188885    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.685754    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:36.685792    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:36.685804    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:36.685814    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:36.688468    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:36.688484    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:36.688492    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:36 GMT
	I0816 06:00:36.688496    4495 round_trippers.go:580]     Audit-Id: a8a2027a-ff3b-4dc3-8bfb-30408f06db19
	I0816 06:00:36.688499    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:36.688503    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:36.688508    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:36.688514    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:36.688657    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:36.688938    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:37.183701    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:37.183727    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:37.183760    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:37.183774    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:37.186458    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:37.186474    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:37.186481    4495 round_trippers.go:580]     Audit-Id: 2478ee78-4c5b-4854-b2d7-40ed83bfe8ff
	I0816 06:00:37.186486    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:37.186490    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:37.186494    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:37.186497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:37.186518    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:37 GMT
	I0816 06:00:37.186636    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:37.684137    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:37.684165    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:37.684177    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:37.684183    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:37.686997    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:37.687018    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:37.687026    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:37.687030    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:37.687034    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:37.687037    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:37.687041    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:37 GMT
	I0816 06:00:37.687045    4495 round_trippers.go:580]     Audit-Id: 127265de-bf18-4af5-8df5-7bec55284a59
	I0816 06:00:37.687121    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:38.184225    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:38.184253    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:38.184265    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:38.184273    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:38.187123    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:38.187143    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:38.187151    4495 round_trippers.go:580]     Audit-Id: 6a8f5a9d-d3ea-4bb8-a712-7ea75e216b5e
	I0816 06:00:38.187156    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:38.187163    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:38.187170    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:38.187174    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:38.187177    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:38 GMT
	I0816 06:00:38.187381    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:38.683932    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:38.683953    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:38.683963    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:38.683969    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:38.686689    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:38.686709    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:38.686717    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:38.686724    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:38.686730    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:38 GMT
	I0816 06:00:38.686735    4495 round_trippers.go:580]     Audit-Id: 13107fe4-d471-4b9f-a2c0-ca34ceaf8ab9
	I0816 06:00:38.686742    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:38.686755    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:38.686834    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:39.185629    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:39.185654    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:39.185666    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:39.185674    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:39.188313    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:39.188330    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:39.188337    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:39.188341    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:39 GMT
	I0816 06:00:39.188345    4495 round_trippers.go:580]     Audit-Id: eb10f5a2-8dac-4080-825b-536b57d39a01
	I0816 06:00:39.188348    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:39.188351    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:39.188354    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:39.188464    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:39.188717    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:39.683570    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:39.683592    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:39.683601    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:39.683608    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:39.686404    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:39.686424    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:39.686435    4495 round_trippers.go:580]     Audit-Id: aff6da3d-a5e1-488d-b69f-21516c86cf33
	I0816 06:00:39.686442    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:39.686448    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:39.686454    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:39.686458    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:39.686462    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:39 GMT
	I0816 06:00:39.686608    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:40.184776    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:40.184800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:40.184810    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:40.184815    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:40.187560    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:40.187579    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:40.187586    4495 round_trippers.go:580]     Audit-Id: e429582a-c575-4dc0-895b-02821e0827a1
	I0816 06:00:40.187590    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:40.187593    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:40.187598    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:40.187601    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:40.187605    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:40 GMT
	I0816 06:00:40.187691    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:40.684417    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:40.684440    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:40.684453    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:40.684459    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:40.686945    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:40.686960    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:40.686967    4495 round_trippers.go:580]     Audit-Id: 81c9f382-cd6f-4411-81ab-38740da9540e
	I0816 06:00:40.686971    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:40.686975    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:40.686980    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:40.686983    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:40.686987    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:40 GMT
	I0816 06:00:40.687114    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.184726    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:41.184753    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:41.184806    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:41.184819    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:41.187612    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:41.187630    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:41.187637    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:41.187644    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:41 GMT
	I0816 06:00:41.187649    4495 round_trippers.go:580]     Audit-Id: f230e89f-5e88-4a7a-9ffc-76611236aaa1
	I0816 06:00:41.187657    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:41.187663    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:41.187670    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:41.187804    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.683710    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:41.683734    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:41.683746    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:41.683753    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:41.686547    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:41.686566    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:41.686574    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:41.686579    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:41.686583    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:41.686587    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:41.686591    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:41 GMT
	I0816 06:00:41.686594    4495 round_trippers.go:580]     Audit-Id: acf1f7ce-0a7a-4579-a4f5-060d34eacfeb
	I0816 06:00:41.686670    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:41.686926    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:42.184911    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:42.184938    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:42.184949    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:42.184956    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:42.187592    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:42.187610    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:42.187618    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:42.187623    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:42.187631    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:42.187636    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:42 GMT
	I0816 06:00:42.187646    4495 round_trippers.go:580]     Audit-Id: b7bf8e18-547a-45a8-92e5-25d0ca741602
	I0816 06:00:42.187655    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:42.188153    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:42.683685    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:42.683708    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:42.683720    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:42.683727    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:42.686857    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:42.686871    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:42.686878    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:42.686883    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:42 GMT
	I0816 06:00:42.686888    4495 round_trippers.go:580]     Audit-Id: d0f137dc-e037-45ea-9113-8fb55c2cd3c5
	I0816 06:00:42.686892    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:42.686896    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:42.686901    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:42.687035    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.185152    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:43.185182    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:43.185194    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:43.185199    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:43.187891    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:43.187907    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:43.187927    4495 round_trippers.go:580]     Audit-Id: 055a2ac7-7e92-479f-9d5b-07e6f6b9eced
	I0816 06:00:43.187933    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:43.187936    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:43.187939    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:43.187966    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:43.187976    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:43 GMT
	I0816 06:00:43.188092    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.684397    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:43.684421    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:43.684431    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:43.684438    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:43.686863    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:43.686877    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:43.686884    4495 round_trippers.go:580]     Audit-Id: ca3700c8-a7c6-42a0-aacd-58a30b3608e4
	I0816 06:00:43.686889    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:43.686893    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:43.686897    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:43.686901    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:43.686904    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:43 GMT
	I0816 06:00:43.687211    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1246","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5509 chars]
	I0816 06:00:43.687468    4495 node_ready.go:53] node "multinode-120000" has status "Ready":"False"
	I0816 06:00:44.184703    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.184725    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.184737    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.184743    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.187423    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:44.187436    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.187444    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.187448    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.187452    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.187457    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.187462    4495 round_trippers.go:580]     Audit-Id: 784dc758-68eb-43c3-84fa-47675351fb5a
	I0816 06:00:44.187465    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.187618    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:44.187882    4495 node_ready.go:49] node "multinode-120000" has status "Ready":"True"
	I0816 06:00:44.187897    4495 node_ready.go:38] duration metric: took 16.504665202s for node "multinode-120000" to be "Ready" ...
	I0816 06:00:44.187904    4495 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:44.187937    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:44.187942    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.187948    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.187951    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.189781    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.189790    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.189799    4495 round_trippers.go:580]     Audit-Id: 1a053412-eda8-4ab9-8655-bacda9460671
	I0816 06:00:44.189809    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.189813    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.189817    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.189821    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.189824    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.190831    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1295"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88975 chars]
	I0816 06:00:44.192799    4495 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:44.192835    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:44.192840    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.192846    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.192851    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.194016    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.194023    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.194028    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.194032    4495 round_trippers.go:580]     Audit-Id: 3161d64a-c1e8-4bcf-8d1b-51dd4d9571b6
	I0816 06:00:44.194036    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.194041    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.194048    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.194054    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.194192    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:44.194440    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.194447    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.194453    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.194458    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.195443    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:44.195452    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.195457    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.195460    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.195463    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.195471    4495 round_trippers.go:580]     Audit-Id: 5d3e841f-7e1d-475a-a0d9-e6cfb1ae83a3
	I0816 06:00:44.195475    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.195477    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.195624    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:44.694002    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:44.694071    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.694084    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.694091    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.696555    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:44.696573    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.696585    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.696593    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.696599    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.696607    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.696613    4495 round_trippers.go:580]     Audit-Id: e35f4581-6dd6-4921-b6c8-a03fc475e76f
	I0816 06:00:44.696619    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.696931    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:44.697320    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:44.697330    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:44.697338    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:44.697344    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:44.698887    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:44.698895    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:44.698900    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:44 GMT
	I0816 06:00:44.698904    4495 round_trippers.go:580]     Audit-Id: eba3e30d-45c9-4664-abb2-01ab58337ffb
	I0816 06:00:44.698908    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:44.698913    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:44.698919    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:44.698923    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:44.699083    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:45.193590    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:45.193613    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.193624    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.193630    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.196063    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:45.196079    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.196086    4495 round_trippers.go:580]     Audit-Id: d94cc2d6-a5e0-46d1-846b-4e2ffaad5215
	I0816 06:00:45.196090    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.196094    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.196097    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.196100    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.196105    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.196401    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:45.196791    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:45.196800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.196808    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.196813    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.198243    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:45.198250    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.198256    4495 round_trippers.go:580]     Audit-Id: 436a3839-50fc-41e0-aefb-994dfc82586f
	I0816 06:00:45.198259    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.198268    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.198273    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.198276    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.198278    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.198428    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:45.693974    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:45.693997    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.694010    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.694019    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.696758    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:45.696772    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.696779    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.696792    4495 round_trippers.go:580]     Audit-Id: 8415e6f8-e946-41af-a80c-15c5751d219f
	I0816 06:00:45.696797    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.696801    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.696805    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.696809    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.696954    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:45.697357    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:45.697366    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:45.697374    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:45.697377    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:45.698909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:45.698916    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:45.698920    4495 round_trippers.go:580]     Audit-Id: 134cd26c-86d3-4b3a-af71-84933a0d88ae
	I0816 06:00:45.698924    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:45.698927    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:45.698929    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:45.698932    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:45.698934    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:45 GMT
	I0816 06:00:45.699056    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:46.193630    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:46.193654    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.193666    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.193674    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.196336    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:46.196353    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.196360    4495 round_trippers.go:580]     Audit-Id: 33ec9444-68be-434e-8dc0-3ef86e499bbf
	I0816 06:00:46.196363    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.196380    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.196387    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.196397    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.196402    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.196875    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:46.197269    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:46.197279    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.197287    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.197291    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.198719    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:46.198729    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.198734    4495 round_trippers.go:580]     Audit-Id: 9f2089a4-8222-4cd0-ba1c-08e5433423c1
	I0816 06:00:46.198737    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.198739    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.198742    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.198745    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.198747    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.198920    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:46.199101    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:46.695010    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:46.695033    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.695045    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.695052    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.697715    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:46.697730    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.697737    4495 round_trippers.go:580]     Audit-Id: a941da9d-2868-4a08-a222-7c84c0646131
	I0816 06:00:46.697743    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.697747    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.697751    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.697755    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.697759    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.698098    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:46.698495    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:46.698504    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:46.698512    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:46.698517    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:46.699909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:46.699919    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:46.699927    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:46.699932    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:46.699936    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:46.699938    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:46 GMT
	I0816 06:00:46.699950    4495 round_trippers.go:580]     Audit-Id: 2e04553b-144d-4c8f-b9f7-6c15ccc65554
	I0816 06:00:46.699954    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:46.700205    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1295","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5286 chars]
	I0816 06:00:47.194074    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:47.194102    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.194114    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.194119    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.197460    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:47.197476    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.197483    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.197488    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.197493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.197497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.197500    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.197513    4495 round_trippers.go:580]     Audit-Id: a7c47070-3a2a-4b06-871c-46f540812c9a
	I0816 06:00:47.197792    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:47.198182    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:47.198192    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.198200    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.198203    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.200341    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:47.200353    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.200362    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.200367    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.200371    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.200375    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.200377    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.200379    4495 round_trippers.go:580]     Audit-Id: 0f8a5e3c-b81c-4d45-b94f-8965cb70e2f0
	I0816 06:00:47.200635    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:47.693737    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:47.693754    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.693762    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.693766    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.695837    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:47.695850    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.695857    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.695863    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.695869    4495 round_trippers.go:580]     Audit-Id: efbcbcb7-7e24-4177-bac6-452bf2000f90
	I0816 06:00:47.695872    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.695878    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.695882    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.695994    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:47.696296    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:47.696303    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:47.696309    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:47.696331    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:47.697653    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:47.697659    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:47.697664    4495 round_trippers.go:580]     Audit-Id: f3478541-8b91-4cba-b592-4cc42c120e6c
	I0816 06:00:47.697667    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:47.697670    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:47.697674    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:47.697678    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:47.697682    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:47 GMT
	I0816 06:00:47.697844    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:48.193093    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:48.193121    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.193132    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.193136    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.195878    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:48.195893    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.195900    4495 round_trippers.go:580]     Audit-Id: 5b2473fa-d9f6-4df6-86a7-ee5024917520
	I0816 06:00:48.195904    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.195909    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.195912    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.195917    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.195921    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.196812    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:48.197831    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:48.197840    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.197847    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.197851    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.199214    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:48.199225    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.199230    4495 round_trippers.go:580]     Audit-Id: a0088d80-1060-4de4-8323-23148ce54ae8
	I0816 06:00:48.199233    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.199235    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.199237    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.199240    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.199242    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.199308    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:48.199512    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
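The repeating GET pairs above are minikube's `pod_ready` wait loop: it re-fetches the coredns pod (and its node) roughly every 500ms until the pod's `Ready` condition flips to `"True"`. As an illustrative sketch only (not minikube source), the check it performs against each logged Response Body looks like this; `is_pod_ready` is a hypothetical helper name:

```python
# Illustrative sketch, not minikube's actual code: the poll loop keeps
# GETting the pod until its "Ready" condition reports status "True".
# `pod` is a dict parsed from the Pod JSON bodies logged above.

def is_pod_ready(pod: dict) -> bool:
    """Return True if the pod's status carries a Ready condition == "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition yet (e.g. pod still Pending)

# Matches the log: coredns keeps reporting Ready=False, so the loop retries.
not_ready = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
print(is_pod_ready(not_ready))
```

The loop gives up after a fixed timeout, which is why a pod that never becomes Ready surfaces as a test failure rather than a hang.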
	I0816 06:00:48.694951    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:48.694979    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.694998    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.695036    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.697668    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:48.697681    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.697688    4495 round_trippers.go:580]     Audit-Id: dcfb047e-65f4-4a55-a55c-8cf446c43fd1
	I0816 06:00:48.697693    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.697700    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.697705    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.697710    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.697714    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.697825    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:48.698194    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:48.698203    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:48.698211    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:48.698218    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:48.699789    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:48.699818    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:48.699828    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:48.699843    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:48 GMT
	I0816 06:00:48.699850    4495 round_trippers.go:580]     Audit-Id: 3f5c629b-235c-489e-8403-ecf62ef91337
	I0816 06:00:48.699853    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:48.699855    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:48.699858    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:48.699916    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:49.193572    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:49.193600    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.193611    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.193618    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.196346    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:49.196379    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.196399    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.196419    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.196444    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.196453    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.196457    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.196460    4495 round_trippers.go:580]     Audit-Id: d89ff074-c96d-4832-986d-80d8732b9f71
	I0816 06:00:49.196623    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:49.196993    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:49.197002    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.197010    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.197014    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.198352    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:49.198361    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.198368    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.198373    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.198378    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.198382    4495 round_trippers.go:580]     Audit-Id: 76d08124-64a9-4ca8-8d3b-8d82f00657af
	I0816 06:00:49.198387    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.198392    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.198556    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:49.693984    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:49.694009    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.694018    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.694025    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.696844    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:49.696860    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.696868    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.696873    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.696877    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.696881    4495 round_trippers.go:580]     Audit-Id: db944d8d-52d6-4dbf-b101-043bff7ea90c
	I0816 06:00:49.696886    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.696890    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.697015    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:49.697386    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:49.697395    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:49.697403    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:49.697429    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:49.698921    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:49.698929    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:49.698934    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:49.698937    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:49.698940    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:49.698942    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:49 GMT
	I0816 06:00:49.698945    4495 round_trippers.go:580]     Audit-Id: eb99aa58-e1f9-4954-bd62-6c0c0bc2dbaa
	I0816 06:00:49.698948    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:49.699094    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.193122    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:50.193150    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.193161    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.193167    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.195943    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:50.195961    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.195969    4495 round_trippers.go:580]     Audit-Id: dede90ac-f9e8-4e5d-a71c-343bccbefb69
	I0816 06:00:50.195974    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.195977    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.196004    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.196012    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.196018    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.196121    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:50.196494    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:50.196504    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.196512    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.196526    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.197909    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:50.197918    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.197923    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.197925    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.197927    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.197930    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.197933    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.197936    4495 round_trippers.go:580]     Audit-Id: e3821546-56c4-429d-a7ca-409ed0b5c376
	I0816 06:00:50.198094    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.694562    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:50.694590    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.694602    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.694608    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.697242    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:50.697257    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.697265    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.697270    4495 round_trippers.go:580]     Audit-Id: e2e06ee0-c869-4597-bf8b-da844e8a006b
	I0816 06:00:50.697274    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.697278    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.697282    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.697285    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.697524    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:50.697896    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:50.697906    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:50.697914    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:50.697919    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:50.699459    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:50.699468    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:50.699476    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:50.699495    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:50.699504    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:50.699521    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:50 GMT
	I0816 06:00:50.699527    4495 round_trippers.go:580]     Audit-Id: ea89083e-33ba-4f08-a0f3-e5d1464b020e
	I0816 06:00:50.699529    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:50.699849    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:50.700021    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:51.193059    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:51.193081    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.193092    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.193098    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.195867    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:51.195880    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.195888    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.195893    4495 round_trippers.go:580]     Audit-Id: b4125754-b93d-411d-9ced-26b31f4bd1d4
	I0816 06:00:51.195897    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.195901    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.195904    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.195908    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.196060    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:51.196429    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:51.196438    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.196446    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.196451    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.197916    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:51.197927    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.197934    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.197940    4495 round_trippers.go:580]     Audit-Id: 26bdde0b-f5e4-4d5e-a93b-8cb8802fe6f6
	I0816 06:00:51.197944    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.197948    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.197951    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.197955    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.198186    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:51.693604    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:51.693626    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.693638    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.693647    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.696401    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:51.696419    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.696430    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.696438    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.696443    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.696465    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.696474    4495 round_trippers.go:580]     Audit-Id: 6089f18d-0734-4927-9662-70ea2b4f65c0
	I0816 06:00:51.696477    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.696769    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:51.697136    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:51.697146    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:51.697154    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:51.697160    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:51.698713    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:51.698721    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:51.698727    4495 round_trippers.go:580]     Audit-Id: 2be2a0c6-6e5d-435e-98ca-3cd9328ce88e
	I0816 06:00:51.698732    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:51.698749    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:51.698753    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:51.698756    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:51.698758    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:51 GMT
	I0816 06:00:51.698866    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:52.193095    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:52.193123    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.193135    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.193144    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.196556    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:52.196573    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.196580    4495 round_trippers.go:580]     Audit-Id: 5c23a1f6-b54d-43e5-b2a5-4f0333cc8019
	I0816 06:00:52.196585    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.196613    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.196622    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.196626    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.196629    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.196778    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:52.197152    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:52.197162    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.197171    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.197186    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.198650    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:52.198659    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.198663    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.198668    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.198673    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.198678    4495 round_trippers.go:580]     Audit-Id: daf0ad65-6369-4382-9cd9-7d81c8a90d71
	I0816 06:00:52.198681    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.198684    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.198798    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:52.694180    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:52.694202    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.694216    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.694221    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.696755    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:52.696769    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.696776    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.696794    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.696798    4495 round_trippers.go:580]     Audit-Id: 91a685d0-8bda-4d82-b91a-1f01d8f9479f
	I0816 06:00:52.696802    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.696805    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.696809    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.696916    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:52.697297    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:52.697312    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:52.697321    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:52.697326    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:52.699106    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:52.699121    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:52.699128    4495 round_trippers.go:580]     Audit-Id: a0dfb14e-e815-43fb-830c-1d068661317a
	I0816 06:00:52.699135    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:52.699140    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:52.699143    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:52.699146    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:52.699149    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:52 GMT
	I0816 06:00:52.699221    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:53.194387    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:53.194413    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.194425    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.194431    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.197272    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:53.197291    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.197299    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.197306    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.197313    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.197318    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.197323    4495 round_trippers.go:580]     Audit-Id: 61747d54-260b-44e3-a41a-4a5d32cdb57d
	I0816 06:00:53.197328    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.197516    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:53.197897    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:53.197907    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.197915    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.197920    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.199434    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:53.199443    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.199447    4495 round_trippers.go:580]     Audit-Id: 49a5e8ba-a111-4faf-87fd-2774dd7befa8
	I0816 06:00:53.199449    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.199452    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.199455    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.199457    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.199459    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.199524    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:53.199691    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:53.693934    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:53.693955    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.693966    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.693974    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.696451    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:53.696465    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.696475    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.696480    4495 round_trippers.go:580]     Audit-Id: 97031ce4-149d-4698-8484-fbd4e6766633
	I0816 06:00:53.696486    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.696489    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.696493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.696497    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.696797    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:53.697162    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:53.697170    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:53.697175    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:53.697180    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:53.698224    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:53.698233    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:53.698237    4495 round_trippers.go:580]     Audit-Id: f14f9405-50a7-4668-8bc8-23ee75a08697
	I0816 06:00:53.698240    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:53.698244    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:53.698249    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:53.698252    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:53.698255    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:53 GMT
	I0816 06:00:53.698406    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:54.192926    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:54.192950    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.192962    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.192968    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.195264    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:54.195276    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.195283    4495 round_trippers.go:580]     Audit-Id: 35341799-17d6-4c03-a451-30b6b32c071d
	I0816 06:00:54.195289    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.195294    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.195300    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.195304    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.195308    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.195400    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:54.195765    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:54.195774    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.195782    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.195786    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.197260    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.197269    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.197274    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.197277    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.197280    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.197284    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.197287    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.197289    4495 round_trippers.go:580]     Audit-Id: aa30861c-6892-49f9-ba64-5ee6d5cb60c4
	I0816 06:00:54.197343    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:54.692787    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:54.692800    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.692807    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.692810    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.694497    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.694507    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.694512    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.694515    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.694518    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.694520    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.694523    4495 round_trippers.go:580]     Audit-Id: 9bbfb7f9-5e9f-44ba-bbda-b0e0a42df8a5
	I0816 06:00:54.694526    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.694614    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:54.694910    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:54.694917    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:54.694923    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:54.694926    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:54.696051    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:54.696061    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:54.696067    4495 round_trippers.go:580]     Audit-Id: 1920a61c-7534-468f-8dc2-a357ba5ebf92
	I0816 06:00:54.696072    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:54.696075    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:54.696080    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:54.696085    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:54.696089    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:54 GMT
	I0816 06:00:54.696337    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.192902    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:55.192925    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.192937    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.192943    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.195900    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:55.195919    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.195926    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.195931    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.195934    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.195939    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.195964    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.195972    4495 round_trippers.go:580]     Audit-Id: cf3f6819-6604-4fa8-985b-301e5c71d88d
	I0816 06:00:55.196184    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:55.196552    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:55.196562    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.196570    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.196575    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.197904    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.197912    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.197917    4495 round_trippers.go:580]     Audit-Id: 04a0a47c-a7ee-4a21-a6ad-e505f4196893
	I0816 06:00:55.197920    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.197938    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.197945    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.197950    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.197955    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.198184    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.692829    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:55.692844    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.692850    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.692855    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.694681    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.694689    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.694694    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.694697    4495 round_trippers.go:580]     Audit-Id: 5f4f3be9-7ccc-4d14-9fc6-7cb349bfb7b6
	I0816 06:00:55.694699    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.694704    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.694707    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.694710    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.694825    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:55.695111    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:55.695118    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:55.695124    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:55.695129    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:55.696315    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:55.696322    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:55.696328    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:55.696332    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:55 GMT
	I0816 06:00:55.696338    4495 round_trippers.go:580]     Audit-Id: 1c2c4641-b582-4b29-ab5d-f92f58f250c4
	I0816 06:00:55.696347    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:55.696354    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:55.696364    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:55.696640    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:55.696816    4495 pod_ready.go:103] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"False"
	I0816 06:00:56.194306    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:56.194326    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.194338    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.194344    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.197337    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:56.197352    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.197362    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.197371    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.197378    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.197383    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.197388    4495 round_trippers.go:580]     Audit-Id: cecd2419-a35c-4a96-bbef-b6262ad05886
	I0816 06:00:56.197394    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.197782    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:56.198082    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:56.198090    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.198096    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.198099    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.199288    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:56.199296    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.199300    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.199318    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.199323    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.199326    4495 round_trippers.go:580]     Audit-Id: 341c369b-d4ef-4095-87e9-9181a6560b9e
	I0816 06:00:56.199329    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.199331    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.199700    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:56.693955    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:56.693979    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.693991    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.693999    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.696425    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:56.696442    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.696452    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.696458    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.696466    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.696474    4495 round_trippers.go:580]     Audit-Id: ba031bef-2cc6-493c-b2b4-1647f1036dcf
	I0816 06:00:56.696480    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.696484    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.696919    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1185","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7093 chars]
	I0816 06:00:56.697287    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:56.697296    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:56.697304    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:56.697308    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:56.698764    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:56.698772    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:56.698777    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:56.698781    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:56.698783    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:56.698788    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:56.698792    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:56 GMT
	I0816 06:00:56.698795    4495 round_trippers.go:580]     Audit-Id: 5d7dfbf4-db55-42e6-a1c1-23c5c051ee87
	I0816 06:00:56.698979    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.193086    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-qvlc2
	I0816 06:00:57.193109    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.193121    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.193128    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.195756    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:57.195770    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.195779    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.195784    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.195788    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.195792    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.195796    4495 round_trippers.go:580]     Audit-Id: 7bfa9069-4ac3-4fbf-b1fa-4ad513a19ede
	I0816 06:00:57.195799    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.195938    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7040 chars]
	I0816 06:00:57.196320    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.196327    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.196332    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.196365    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.197653    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.197665    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.197671    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.197675    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.197679    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.197682    4495 round_trippers.go:580]     Audit-Id: 7ca4b854-8de7-4028-8dea-80170976795b
	I0816 06:00:57.197685    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.197688    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.197769    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.198023    4495 pod_ready.go:93] pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.198031    4495 pod_ready.go:82] duration metric: took 13.005479196s for pod "coredns-6f6b679f8f-qvlc2" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.198051    4495 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.198102    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-120000
	I0816 06:00:57.198106    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.198112    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.198117    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.199301    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.199312    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.199319    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.199322    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.199326    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.199329    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.199334    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.199337    4495 round_trippers.go:580]     Audit-Id: 776899db-76a1-4520-91dc-1a232042cc6b
	I0816 06:00:57.199529    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-120000","namespace":"kube-system","uid":"f939a427-2f57-47e3-9426-ff75932f1ecb","resourceVersion":"1278","creationTimestamp":"2024-08-16T12:54:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.14:2379","kubernetes.io/config.hash":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.mirror":"41683de9d74221749efa0bc640284da9","kubernetes.io/config.seen":"2024-08-16T12:54:22.936335857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6664 chars]
	I0816 06:00:57.199751    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.199758    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.199763    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.199767    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.200815    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.200824    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.200829    4495 round_trippers.go:580]     Audit-Id: c1eb6eb0-9fdb-4aca-9fe0-ec00ecb240a4
	I0816 06:00:57.200833    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.200836    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.200839    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.200843    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.200847    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.201128    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.201301    4495 pod_ready.go:93] pod "etcd-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.201309    4495 pod_ready.go:82] duration metric: took 3.253593ms for pod "etcd-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.201318    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.201346    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-120000
	I0816 06:00:57.201351    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.201356    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.201359    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.202430    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.202441    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.202449    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.202455    4495 round_trippers.go:580]     Audit-Id: 131388fe-c69c-4328-b724-70219eb7e2cb
	I0816 06:00:57.202460    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.202463    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.202468    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.202471    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.202638    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-120000","namespace":"kube-system","uid":"6811daff-acfb-4752-939b-3d084a8a4c9a","resourceVersion":"1282","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.14:8443","kubernetes.io/config.hash":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.mirror":"981839c39d6cef70ec84c36336bc096c","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479305Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0816 06:00:57.202863    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.202871    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.202876    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.202879    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.203876    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:57.203887    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.203895    4495 round_trippers.go:580]     Audit-Id: b14c3d4e-a101-4d94-a887-2d4e9659b31b
	I0816 06:00:57.203899    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.203912    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.203919    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.203922    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.203925    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.204052    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.204213    4495 pod_ready.go:93] pod "kube-apiserver-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.204221    4495 pod_ready.go:82] duration metric: took 2.89703ms for pod "kube-apiserver-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.204227    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.204253    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-120000
	I0816 06:00:57.204258    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.204263    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.204267    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.205301    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.205308    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.205312    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.205315    4495 round_trippers.go:580]     Audit-Id: 0433aad9-f030-4cdf-9ae4-34a226c8e7d5
	I0816 06:00:57.205317    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.205321    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.205325    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.205328    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.205450    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-120000","namespace":"kube-system","uid":"67f0047c-62f5-4c90-bee3-40dc18cb33e6","resourceVersion":"1285","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.mirror":"d2e65090a9ffd50a73432dad0e75d109","kubernetes.io/config.seen":"2024-08-16T12:54:27.908479986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0816 06:00:57.205670    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.205677    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.205683    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.205685    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.206873    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.206881    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.206886    4495 round_trippers.go:580]     Audit-Id: 93d1a138-a7da-4caf-aff2-aab72c501a4a
	I0816 06:00:57.206890    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.206895    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.206897    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.206900    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.206903    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.207290    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.207448    4495 pod_ready.go:93] pod "kube-controller-manager-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.207456    4495 pod_ready.go:82] duration metric: took 3.223772ms for pod "kube-controller-manager-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.207463    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.207489    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-msbdc
	I0816 06:00:57.207494    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.207499    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.207504    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.208473    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:57.208480    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.208487    4495 round_trippers.go:580]     Audit-Id: 3c061cea-7d11-4206-b7c9-36c8140e2cd9
	I0816 06:00:57.208494    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.208498    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.208503    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.208506    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.208509    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.208608    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-msbdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"2dff96db-7737-4e41-a130-a356e3acfd78","resourceVersion":"1263","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6395 chars]
	I0816 06:00:57.208841    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:57.208848    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.208854    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.208858    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.209894    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.209902    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.209907    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.209912    4495 round_trippers.go:580]     Audit-Id: a45d4e26-0e90-4305-a879-912117dbd94d
	I0816 06:00:57.209919    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.209922    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.209926    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.209929    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.210022    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:57.210179    4495 pod_ready.go:93] pod "kube-proxy-msbdc" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.210186    4495 pod_ready.go:82] duration metric: took 2.718202ms for pod "kube-proxy-msbdc" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.210201    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.393284    4495 request.go:632] Waited for 183.014885ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:57.393390    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vskxm
	I0816 06:00:57.393402    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.393413    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.393421    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.396951    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:57.396967    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.396974    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.396979    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.396982    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.396986    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.396991    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.396995    4495 round_trippers.go:580]     Audit-Id: bbe21f71-61d2-430f-953b-e7b8093dcfd4
	I0816 06:00:57.397075    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vskxm","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9b8ca3d-b5bd-4c44-8579-8b31879629ad","resourceVersion":"1104","creationTimestamp":"2024-08-16T12:56:05Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:56:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:57.595047    4495 request.go:632] Waited for 197.620166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:57.595174    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m03
	I0816 06:00:57.595186    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.595196    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.595206    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.598193    4495 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0816 06:00:57.598205    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.598212    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.598219    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.598224    4495 round_trippers.go:580]     Content-Length: 210
	I0816 06:00:57.598228    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.598231    4495 round_trippers.go:580]     Audit-Id: e6c1db4f-035f-4dcb-b03e-b514ca813439
	I0816 06:00:57.598234    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.598237    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.598250    4495 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-120000-m03\" not found","reason":"NotFound","details":{"name":"multinode-120000-m03","kind":"nodes"},"code":404}
	I0816 06:00:57.598310    4495 pod_ready.go:98] node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:57.598322    4495 pod_ready.go:82] duration metric: took 388.121491ms for pod "kube-proxy-vskxm" in "kube-system" namespace to be "Ready" ...
	E0816 06:00:57.598331    4495 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-120000-m03" hosting pod "kube-proxy-vskxm" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-120000-m03": nodes "multinode-120000-m03" not found
	I0816 06:00:57.598339    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.795148    4495 request.go:632] Waited for 196.68935ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:57.795198    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x88cp
	I0816 06:00:57.795206    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.795217    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.795226    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.798419    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:57.798436    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.798443    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.798447    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.798450    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:57 GMT
	I0816 06:00:57.798454    4495 round_trippers.go:580]     Audit-Id: d1ed5234-af59-46c2-88e2-a2256bd63004
	I0816 06:00:57.798457    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.798465    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.798600    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x88cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"21efba47-35db-47ba-ace5-119b04bf7355","resourceVersion":"1001","creationTimestamp":"2024-08-16T12:55:15Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e20e4ee7-fb17-4df7-a693-2f78364d08f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:55:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e20e4ee7-fb17-4df7-a693-2f78364d08f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0816 06:00:57.994207    4495 request.go:632] Waited for 195.273451ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:57.994254    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000-m02
	I0816 06:00:57.994262    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:57.994271    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:57.994276    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:57.996238    4495 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0816 06:00:57.996252    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:57.996262    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:57.996266    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:57.996269    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:57.996273    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:57.996279    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:57.996286    4495 round_trippers.go:580]     Audit-Id: 2660763d-6ecb-494b-a793-54fd44f2fe86
	I0816 06:00:57.996431    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000-m02","uid":"57b3de1e-d3de-4534-9ecc-a0706c682584","resourceVersion":"1019","creationTimestamp":"2024-08-16T12:58:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_16T05_58_44_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:58:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3805 chars]
	I0816 06:00:57.996608    4495 pod_ready.go:93] pod "kube-proxy-x88cp" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:57.996617    4495 pod_ready.go:82] duration metric: took 398.279857ms for pod "kube-proxy-x88cp" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:57.996624    4495 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:58.194643    4495 request.go:632] Waited for 197.953694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:58.194771    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-120000
	I0816 06:00:58.194783    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.194797    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.194805    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.197487    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:58.197504    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.197511    4495 round_trippers.go:580]     Audit-Id: efed1839-ae04-4990-bf50-53ddeffece79
	I0816 06:00:58.197516    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.197520    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.197525    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.197528    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.197532    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.197622    4495 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-120000","namespace":"kube-system","uid":"b8188bb8-5278-422d-86a5-19d70c796638","resourceVersion":"1291","creationTimestamp":"2024-08-16T12:54:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.mirror":"f2ffa96461432294fd452f8f782e005b","kubernetes.io/config.seen":"2024-08-16T12:54:27.908480653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0816 06:00:58.393792    4495 request.go:632] Waited for 195.865798ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:58.393925    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes/multinode-120000
	I0816 06:00:58.393938    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.393950    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.393958    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.397065    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.397081    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.397088    4495 round_trippers.go:580]     Audit-Id: ec49ba35-97a7-49fd-be60-931320cb5ecb
	I0816 06:00:58.397093    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.397102    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.397110    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.397118    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.397145    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.397575    4495 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-16T12:54:25Z","fieldsType":"FieldsV1","f [truncated 5166 chars]
	I0816 06:00:58.397827    4495 pod_ready.go:93] pod "kube-scheduler-multinode-120000" in "kube-system" namespace has status "Ready":"True"
	I0816 06:00:58.397840    4495 pod_ready.go:82] duration metric: took 401.218068ms for pod "kube-scheduler-multinode-120000" in "kube-system" namespace to be "Ready" ...
	I0816 06:00:58.397849    4495 pod_ready.go:39] duration metric: took 14.210217723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 06:00:58.397864    4495 api_server.go:52] waiting for apiserver process to appear ...
	I0816 06:00:58.397928    4495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 06:00:58.410899    4495 command_runner.go:130] > 1735
	I0816 06:00:58.410960    4495 api_server.go:72] duration metric: took 30.977063274s to wait for apiserver process to appear ...
	I0816 06:00:58.410971    4495 api_server.go:88] waiting for apiserver healthz status ...
	I0816 06:00:58.410982    4495 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 06:00:58.414427    4495 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0816 06:00:58.414458    4495 round_trippers.go:463] GET https://192.169.0.14:8443/version
	I0816 06:00:58.414463    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.414469    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.414473    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.414934    4495 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0816 06:00:58.414944    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.414958    4495 round_trippers.go:580]     Audit-Id: 9dfd7757-f7bf-4d1d-b5df-89f0fddf7c8b
	I0816 06:00:58.414979    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.414985    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.414992    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.414996    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.414999    4495 round_trippers.go:580]     Content-Length: 263
	I0816 06:00:58.415002    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.415010    4495 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0816 06:00:58.415030    4495 api_server.go:141] control plane version: v1.31.0
	I0816 06:00:58.415038    4495 api_server.go:131] duration metric: took 4.062075ms to wait for apiserver health ...
	I0816 06:00:58.415044    4495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 06:00:58.594269    4495 request.go:632] Waited for 179.184796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.594362    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.594372    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.594383    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.594391    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.597820    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.597830    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.597835    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.597839    4495 round_trippers.go:580]     Audit-Id: 53dd41ea-d419-4412-b804-a5539fe60b44
	I0816 06:00:58.597841    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.597845    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.597849    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.597851    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.599096    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89335 chars]
	I0816 06:00:58.601012    4495 system_pods.go:59] 12 kube-system pods found
	I0816 06:00:58.601023    4495 system_pods.go:61] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running
	I0816 06:00:58.601026    4495 system_pods.go:61] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running
	I0816 06:00:58.601029    4495 system_pods.go:61] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:58.601032    4495 system_pods.go:61] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:58.601037    4495 system_pods.go:61] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running
	I0816 06:00:58.601040    4495 system_pods.go:61] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running
	I0816 06:00:58.601043    4495 system_pods.go:61] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running
	I0816 06:00:58.601046    4495 system_pods.go:61] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running
	I0816 06:00:58.601048    4495 system_pods.go:61] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:58.601051    4495 system_pods.go:61] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:58.601053    4495 system_pods.go:61] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running
	I0816 06:00:58.601058    4495 system_pods.go:61] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:58.601061    4495 system_pods.go:74] duration metric: took 186.018056ms to wait for pod list to return data ...
	I0816 06:00:58.601067    4495 default_sa.go:34] waiting for default service account to be created ...
	I0816 06:00:58.795127    4495 request.go:632] Waited for 193.965542ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0816 06:00:58.795226    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/default/serviceaccounts
	I0816 06:00:58.795237    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.795248    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.795255    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.798470    4495 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0816 06:00:58.798486    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.798493    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.798497    4495 round_trippers.go:580]     Content-Length: 262
	I0816 06:00:58.798500    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:58 GMT
	I0816 06:00:58.798503    4495 round_trippers.go:580]     Audit-Id: e8f9e561-60af-425b-a3f9-3b44a9fda1fd
	I0816 06:00:58.798506    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.798510    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.798528    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.798551    4495 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6cef175d-5a13-4cc4-a06f-ecf9ac67dfb6","resourceVersion":"361","creationTimestamp":"2024-08-16T12:54:33Z"}}]}
	I0816 06:00:58.798699    4495 default_sa.go:45] found service account: "default"
	I0816 06:00:58.798713    4495 default_sa.go:55] duration metric: took 197.643778ms for default service account to be created ...
	I0816 06:00:58.798721    4495 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 06:00:58.993428    4495 request.go:632] Waited for 194.666557ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.993458    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/namespaces/kube-system/pods
	I0816 06:00:58.993463    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:58.993469    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:58.993474    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:58.999368    4495 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0816 06:00:58.999379    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:58.999384    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:58.999389    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:59 GMT
	I0816 06:00:58.999393    4495 round_trippers.go:580]     Audit-Id: 6e65c80c-c099-4005-ade4-9a18d234dfc8
	I0816 06:00:58.999397    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:58.999402    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:58.999406    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:58.999984    4495 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-qvlc2","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"08cca513-a37c-44f0-b558-30530308cb3f","resourceVersion":"1320","creationTimestamp":"2024-08-16T12:54:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"353e11cf-16a7-47e1-a8a6-41acfff87a32","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-16T12:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"353e11cf-16a7-47e1-a8a6-41acfff87a32\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89335 chars]
	I0816 06:00:59.001890    4495 system_pods.go:86] 12 kube-system pods found
	I0816 06:00:59.001900    4495 system_pods.go:89] "coredns-6f6b679f8f-qvlc2" [08cca513-a37c-44f0-b558-30530308cb3f] Running
	I0816 06:00:59.001904    4495 system_pods.go:89] "etcd-multinode-120000" [f939a427-2f57-47e3-9426-ff75932f1ecb] Running
	I0816 06:00:59.001908    4495 system_pods.go:89] "kindnet-gxqsm" [00445af6-3ec4-494a-8197-1a980b6e1dfa] Running
	I0816 06:00:59.001912    4495 system_pods.go:89] "kindnet-lww85" [b95ff52e-8f48-4c77-9cdb-d3866c2552f6] Running
	I0816 06:00:59.001915    4495 system_pods.go:89] "kindnet-wd2x6" [7fd57563-897b-45cb-825b-e202994dcc34] Running
	I0816 06:00:59.001918    4495 system_pods.go:89] "kube-apiserver-multinode-120000" [6811daff-acfb-4752-939b-3d084a8a4c9a] Running
	I0816 06:00:59.001922    4495 system_pods.go:89] "kube-controller-manager-multinode-120000" [67f0047c-62f5-4c90-bee3-40dc18cb33e6] Running
	I0816 06:00:59.001925    4495 system_pods.go:89] "kube-proxy-msbdc" [2dff96db-7737-4e41-a130-a356e3acfd78] Running
	I0816 06:00:59.001928    4495 system_pods.go:89] "kube-proxy-vskxm" [b9b8ca3d-b5bd-4c44-8579-8b31879629ad] Running
	I0816 06:00:59.001931    4495 system_pods.go:89] "kube-proxy-x88cp" [21efba47-35db-47ba-ace5-119b04bf7355] Running
	I0816 06:00:59.001934    4495 system_pods.go:89] "kube-scheduler-multinode-120000" [b8188bb8-5278-422d-86a5-19d70c796638] Running
	I0816 06:00:59.001939    4495 system_pods.go:89] "storage-provisioner" [03776551-6bfa-4cdb-a48f-b32c38e3f900] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 06:00:59.001944    4495 system_pods.go:126] duration metric: took 203.222577ms to wait for k8s-apps to be running ...
	I0816 06:00:59.001953    4495 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 06:00:59.002005    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 06:00:59.013228    4495 system_svc.go:56] duration metric: took 11.274015ms WaitForService to wait for kubelet
	I0816 06:00:59.013241    4495 kubeadm.go:582] duration metric: took 31.579356247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:00:59.013257    4495 node_conditions.go:102] verifying NodePressure condition ...
	I0816 06:00:59.193988    4495 request.go:632] Waited for 180.658554ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:59.194062    4495 round_trippers.go:463] GET https://192.169.0.14:8443/api/v1/nodes
	I0816 06:00:59.194070    4495 round_trippers.go:469] Request Headers:
	I0816 06:00:59.194081    4495 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0816 06:00:59.194088    4495 round_trippers.go:473]     Accept: application/json, */*
	I0816 06:00:59.196901    4495 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0816 06:00:59.196917    4495 round_trippers.go:577] Response Headers:
	I0816 06:00:59.196924    4495 round_trippers.go:580]     Cache-Control: no-cache, private
	I0816 06:00:59.196929    4495 round_trippers.go:580]     Content-Type: application/json
	I0816 06:00:59.196956    4495 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 23f7687a-a30f-4ede-a906-30c73a3a118a
	I0816 06:00:59.196965    4495 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 72f2de15-214f-4bcf-bb18-42ad45257c4b
	I0816 06:00:59.196971    4495 round_trippers.go:580]     Date: Fri, 16 Aug 2024 13:00:59 GMT
	I0816 06:00:59.196975    4495 round_trippers.go:580]     Audit-Id: aba56bfc-9e40-433b-a514-9d8e27ae8f86
	I0816 06:00:59.197113    4495 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1324"},"items":[{"metadata":{"name":"multinode-120000","uid":"e50563ad-fef7-40e8-87ea-8f8aa15409c8","resourceVersion":"1297","creationTimestamp":"2024-08-16T12:54:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-120000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab84f9bc76071a77c857a14f5c66dccc01002b05","minikube.k8s.io/name":"multinode-120000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_16T05_54_28_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10017 chars]
	I0816 06:00:59.197496    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:59.197507    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:59.197516    4495 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0816 06:00:59.197521    4495 node_conditions.go:123] node cpu capacity is 2
	I0816 06:00:59.197526    4495 node_conditions.go:105] duration metric: took 184.267075ms to run NodePressure ...
	I0816 06:00:59.197536    4495 start.go:241] waiting for startup goroutines ...
	I0816 06:00:59.197543    4495 start.go:246] waiting for cluster config update ...
	I0816 06:00:59.197552    4495 start.go:255] writing updated cluster config ...
	I0816 06:00:59.219582    4495 out.go:201] 
	I0816 06:00:59.241519    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:59.241632    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.264316    4495 out.go:177] * Starting "multinode-120000-m02" worker node in "multinode-120000" cluster
	I0816 06:00:59.306184    4495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:00:59.306218    4495 cache.go:56] Caching tarball of preloaded images
	I0816 06:00:59.306418    4495 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:00:59.306436    4495 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:00:59.306546    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.307365    4495 start.go:360] acquireMachinesLock for multinode-120000-m02: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:00:59.307481    4495 start.go:364] duration metric: took 91.575µs to acquireMachinesLock for "multinode-120000-m02"
	I0816 06:00:59.307508    4495 start.go:96] Skipping create...Using existing machine configuration
	I0816 06:00:59.307515    4495 fix.go:54] fixHost starting: m02
	I0816 06:00:59.307945    4495 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:59.307981    4495 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:59.317188    4495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53314
	I0816 06:00:59.317569    4495 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:59.317913    4495 main.go:141] libmachine: Using API Version  1
	I0816 06:00:59.317923    4495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:59.318122    4495 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:59.318232    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:00:59.318317    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetState
	I0816 06:00:59.318397    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.318479    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4443
	I0816 06:00:59.319411    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid 4443 missing from process table
	I0816 06:00:59.319466    4495 fix.go:112] recreateIfNeeded on multinode-120000-m02: state=Stopped err=<nil>
	I0816 06:00:59.319507    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	W0816 06:00:59.319598    4495 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 06:00:59.342208    4495 out.go:177] * Restarting existing hyperkit VM for "multinode-120000-m02" ...
	I0816 06:00:59.363128    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .Start
	I0816 06:00:59.363436    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.363488    4495 main.go:141] libmachine: (multinode-120000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid
	I0816 06:00:59.365357    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid 4443 missing from process table
	I0816 06:00:59.365375    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | pid 4443 is in state "Stopped"
	I0816 06:00:59.365403    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid...
	I0816 06:00:59.365644    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Using UUID ee85a2c3-93d0-4de0-ac93-052eb9962a60
	I0816 06:00:59.392192    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Generated MAC fa:8b:6e:be:7a:d1
	I0816 06:00:59.392215    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000
	I0816 06:00:59.392328    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee85a2c3-93d0-4de0-ac93-052eb9962a60", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0816 06:00:59.392369    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ee85a2c3-93d0-4de0-ac93-052eb9962a60", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1b30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0816 06:00:59.392424    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ee85a2c3-93d0-4de0-ac93-052eb9962a60", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/multinode-120000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage,/Users/j
enkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"}
	I0816 06:00:59.392465    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ee85a2c3-93d0-4de0-ac93-052eb9962a60 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/multinode-120000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/mult
inode-120000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-120000"
	I0816 06:00:59.392475    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:00:59.393842    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 DEBUG: hyperkit: Pid is 4816
	I0816 06:00:59.394301    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Attempt 0
	I0816 06:00:59.394311    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:59.394408    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4816
	I0816 06:00:59.396517    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Searching for fa:8b:6e:be:7a:d1 in /var/db/dhcpd_leases ...
	I0816 06:00:59.396631    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0816 06:00:59.396665    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:00:59.396689    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:00:59.396706    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66c09e76}
	I0816 06:00:59.396723    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | Found match: fa:8b:6e:be:7a:d1
	I0816 06:00:59.396735    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | IP: 192.169.0.15
	I0816 06:00:59.396766    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetConfigRaw
	I0816 06:00:59.397508    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:00:59.397686    4495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/multinode-120000/config.json ...
	I0816 06:00:59.398150    4495 machine.go:93] provisionDockerMachine start ...
	I0816 06:00:59.398161    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:00:59.398280    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:00:59.398384    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:00:59.398487    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:00:59.398584    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:00:59.398663    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:00:59.398779    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:00:59.398936    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:00:59.398947    4495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 06:00:59.401869    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:00:59.409985    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:00:59.410992    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:59.411010    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:59.411040    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:59.411055    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:59.797737    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:00:59.797763    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:00:59.912467    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:00:59.912494    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:00:59.912503    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:00:59.912510    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:00:59.913309    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:00:59.913319    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:00:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:01:05.478997    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0816 06:01:05.479099    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0816 06:01:05.479115    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0816 06:01:05.504217    4495 main.go:141] libmachine: (multinode-120000-m02) DBG | 2024/08/16 06:01:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0816 06:01:10.467844    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 06:01:10.467861    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.467994    4495 buildroot.go:166] provisioning hostname "multinode-120000-m02"
	I0816 06:01:10.468006    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.468090    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.468183    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.468288    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.468377    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.468462    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.468595    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.468749    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.468761    4495 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-120000-m02 && echo "multinode-120000-m02" | sudo tee /etc/hostname
	I0816 06:01:10.542740    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-120000-m02
	
	I0816 06:01:10.542760    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.542891    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.542994    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.543103    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.543188    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.543325    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.543468    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.543480    4495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-120000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-120000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-120000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 06:01:10.613856    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:01:10.613871    4495 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 06:01:10.613888    4495 buildroot.go:174] setting up certificates
	I0816 06:01:10.613895    4495 provision.go:84] configureAuth start
	I0816 06:01:10.613902    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetMachineName
	I0816 06:01:10.614033    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:01:10.614135    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.614217    4495 provision.go:143] copyHostCerts
	I0816 06:01:10.614244    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:01:10.614309    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 06:01:10.614321    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:01:10.614523    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 06:01:10.614724    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:01:10.614764    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 06:01:10.614769    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:01:10.614850    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 06:01:10.614989    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:01:10.615028    4495 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 06:01:10.615033    4495 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:01:10.615116    4495 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 06:01:10.615260    4495 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.multinode-120000-m02 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-120000-m02]
	I0816 06:01:10.752465    4495 provision.go:177] copyRemoteCerts
	I0816 06:01:10.752518    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 06:01:10.752532    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.752646    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.752746    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.752838    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.752935    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:10.792382    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0816 06:01:10.792452    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 06:01:10.811166    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0816 06:01:10.811231    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0816 06:01:10.830044    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0816 06:01:10.830105    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 06:01:10.848831    4495 provision.go:87] duration metric: took 234.932882ms to configureAuth
	I0816 06:01:10.848843    4495 buildroot.go:189] setting minikube options for container-runtime
	I0816 06:01:10.849004    4495 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:01:10.849017    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:10.849142    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.849233    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.849314    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.849399    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.849473    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.849586    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.849717    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.849725    4495 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 06:01:10.913790    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 06:01:10.913802    4495 buildroot.go:70] root file system type: tmpfs
	I0816 06:01:10.913891    4495 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 06:01:10.913901    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.914033    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.914130    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.914230    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.914313    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.914447    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.914588    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.914635    4495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 06:01:10.989565    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 06:01:10.989584    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:10.989736    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:10.989836    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.989914    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:10.990002    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:10.990140    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:10.990292    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:10.990304    4495 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 06:01:12.597650    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 06:01:12.597675    4495 machine.go:96] duration metric: took 13.199777318s to provisionDockerMachine
	I0816 06:01:12.597700    4495 start.go:293] postStartSetup for "multinode-120000-m02" (driver="hyperkit")
	I0816 06:01:12.597716    4495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 06:01:12.597730    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.597918    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 06:01:12.597930    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.598026    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.598112    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.598198    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.598279    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.641252    4495 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 06:01:12.645893    4495 command_runner.go:130] > NAME=Buildroot
	I0816 06:01:12.645902    4495 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0816 06:01:12.645906    4495 command_runner.go:130] > ID=buildroot
	I0816 06:01:12.645910    4495 command_runner.go:130] > VERSION_ID=2023.02.9
	I0816 06:01:12.645914    4495 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0816 06:01:12.646115    4495 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 06:01:12.646129    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 06:01:12.646249    4495 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 06:01:12.646427    4495 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 06:01:12.646433    4495 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> /etc/ssl/certs/15542.pem
	I0816 06:01:12.646648    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 06:01:12.657358    4495 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:01:12.689642    4495 start.go:296] duration metric: took 91.931255ms for postStartSetup
	I0816 06:01:12.689664    4495 fix.go:56] duration metric: took 13.382413227s for fixHost
	I0816 06:01:12.689724    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.689854    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.689949    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.690035    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.690112    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.690231    4495 main.go:141] libmachine: Using SSH client type: native
	I0816 06:01:12.690366    4495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb011ea0] 0xb014c00 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0816 06:01:12.690374    4495 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 06:01:12.754485    4495 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723813272.898141464
	
	I0816 06:01:12.754498    4495 fix.go:216] guest clock: 1723813272.898141464
	I0816 06:01:12.754503    4495 fix.go:229] Guest: 2024-08-16 06:01:12.898141464 -0700 PDT Remote: 2024-08-16 06:01:12.68967 -0700 PDT m=+72.289020142 (delta=208.471464ms)
	I0816 06:01:12.754516    4495 fix.go:200] guest clock delta is within tolerance: 208.471464ms
	I0816 06:01:12.754519    4495 start.go:83] releasing machines lock for "multinode-120000-m02", held for 13.447292745s
	I0816 06:01:12.754536    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.754672    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 06:01:12.775333    4495 out.go:177] * Found network options:
	I0816 06:01:12.796965    4495 out.go:177]   - NO_PROXY=192.169.0.14
	W0816 06:01:12.818178    4495 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 06:01:12.818216    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819064    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819327    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 06:01:12.819483    4495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 06:01:12.819528    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	W0816 06:01:12.819581    4495 proxy.go:119] fail to check proxy env: Error ip not in block
	I0816 06:01:12.819704    4495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 06:01:12.819720    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.819767    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 06:01:12.819917    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 06:01:12.819961    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.820120    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 06:01:12.820160    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.820334    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.820350    4495 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 06:01:12.820527    4495 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 06:01:12.856617    4495 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0816 06:01:12.856639    4495 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 06:01:12.856693    4495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 06:01:12.899107    4495 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0816 06:01:12.899982    4495 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0816 06:01:12.900016    4495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 06:01:12.900027    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:01:12.900139    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:01:12.916181    4495 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0816 06:01:12.916362    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 06:01:12.924577    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 06:01:12.932819    4495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 06:01:12.932863    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 06:01:12.941195    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:01:12.949373    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 06:01:12.957522    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:01:12.965815    4495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 06:01:12.974440    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 06:01:12.982832    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 06:01:12.991213    4495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 06:01:12.999286    4495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 06:01:13.006499    4495 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0816 06:01:13.006629    4495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 06:01:13.013965    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:01:13.112853    4495 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 06:01:13.132768    4495 start.go:495] detecting cgroup driver to use...
	I0816 06:01:13.132836    4495 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 06:01:13.149276    4495 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0816 06:01:13.150775    4495 command_runner.go:130] > [Unit]
	I0816 06:01:13.150786    4495 command_runner.go:130] > Description=Docker Application Container Engine
	I0816 06:01:13.150792    4495 command_runner.go:130] > Documentation=https://docs.docker.com
	I0816 06:01:13.150797    4495 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0816 06:01:13.150802    4495 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0816 06:01:13.150812    4495 command_runner.go:130] > StartLimitBurst=3
	I0816 06:01:13.150816    4495 command_runner.go:130] > StartLimitIntervalSec=60
	I0816 06:01:13.150820    4495 command_runner.go:130] > [Service]
	I0816 06:01:13.150823    4495 command_runner.go:130] > Type=notify
	I0816 06:01:13.150826    4495 command_runner.go:130] > Restart=on-failure
	I0816 06:01:13.150832    4495 command_runner.go:130] > Environment=NO_PROXY=192.169.0.14
	I0816 06:01:13.150837    4495 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0816 06:01:13.150847    4495 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0816 06:01:13.150854    4495 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0816 06:01:13.150859    4495 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0816 06:01:13.150866    4495 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0816 06:01:13.150871    4495 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0816 06:01:13.150878    4495 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0816 06:01:13.150890    4495 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0816 06:01:13.150895    4495 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0816 06:01:13.150899    4495 command_runner.go:130] > ExecStart=
	I0816 06:01:13.150911    4495 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0816 06:01:13.150921    4495 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0816 06:01:13.150929    4495 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0816 06:01:13.150935    4495 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0816 06:01:13.150940    4495 command_runner.go:130] > LimitNOFILE=infinity
	I0816 06:01:13.150943    4495 command_runner.go:130] > LimitNPROC=infinity
	I0816 06:01:13.150948    4495 command_runner.go:130] > LimitCORE=infinity
	I0816 06:01:13.150954    4495 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0816 06:01:13.150959    4495 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0816 06:01:13.150963    4495 command_runner.go:130] > TasksMax=infinity
	I0816 06:01:13.150968    4495 command_runner.go:130] > TimeoutStartSec=0
	I0816 06:01:13.150974    4495 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0816 06:01:13.150979    4495 command_runner.go:130] > Delegate=yes
	I0816 06:01:13.150990    4495 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0816 06:01:13.151009    4495 command_runner.go:130] > KillMode=process
	I0816 06:01:13.151017    4495 command_runner.go:130] > [Install]
	I0816 06:01:13.151023    4495 command_runner.go:130] > WantedBy=multi-user.target
	I0816 06:01:13.151101    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:01:13.166056    4495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 06:01:13.185416    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:01:13.196151    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:01:13.206342    4495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 06:01:13.233286    4495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:01:13.244370    4495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:01:13.258918    4495 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0816 06:01:13.259169    4495 ssh_runner.go:195] Run: which cri-dockerd
	I0816 06:01:13.261923    4495 command_runner.go:130] > /usr/bin/cri-dockerd
	I0816 06:01:13.262087    4495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 06:01:13.269387    4495 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 06:01:13.282813    4495 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 06:01:13.380083    4495 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 06:01:13.480701    4495 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 06:01:13.480726    4495 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 06:01:13.495596    4495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:01:13.600122    4495 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 06:02:14.623949    4495 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0816 06:02:14.623964    4495 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0816 06:02:14.623974    4495 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.025042574s)
	I0816 06:02:14.624046    4495 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 06:02:14.633055    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0816 06:02:14.633068    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445128662Z" level=info msg="Starting up"
	I0816 06:02:14.633077    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445576424Z" level=info msg="containerd not running, starting managed containerd"
	I0816 06:02:14.633091    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.446087902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	I0816 06:02:14.633100    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.464562092Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0816 06:02:14.633110    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479466694Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0816 06:02:14.633122    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479531751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633131    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479594404Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0816 06:02:14.633143    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479629031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633154    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479842292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633164    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479889532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633183    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480015247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633208    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480066795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633226    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480105284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633237    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480134704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633249    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480284892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633260    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480518152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633274    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482158345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633284    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482227762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0816 06:02:14.633310    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482355246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0816 06:02:14.633322    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482401189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0816 06:02:14.633334    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482551004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0816 06:02:14.633342    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482610366Z" level=info msg="metadata content store policy set" policy=shared
	I0816 06:02:14.633350    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484743898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0816 06:02:14.633359    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484842901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0816 06:02:14.633368    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484892400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0816 06:02:14.633378    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484992184Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0816 06:02:14.633387    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485035944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0816 06:02:14.633396    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485102391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0816 06:02:14.633404    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485716230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0816 06:02:14.633413    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485838842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0816 06:02:14.633424    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485887463Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0816 06:02:14.633433    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485941187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0816 06:02:14.633444    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485983421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633456    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486017407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633467    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486071726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633476    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486113872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633485    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486150386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633495    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486191889Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633571    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486229406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633584    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486263661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0816 06:02:14.633593    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486305970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633602    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486413763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633611    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486510443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633622    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486666027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633631    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486744588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633640    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486783463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633650    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486821985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633659    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486859811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633668    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486892478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633678    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486925903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633687    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486956569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633696    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486987244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633705    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487017252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633714    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487049437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0816 06:02:14.633723    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487086389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633732    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487117852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633741    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487147113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633750    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487232935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0816 06:02:14.633762    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487282108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0816 06:02:14.633772    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487315003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0816 06:02:14.633845    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487367683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0816 06:02:14.633858    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487403326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0816 06:02:14.633868    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487433733Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0816 06:02:14.633876    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487462518Z" level=info msg="NRI interface is disabled by configuration."
	I0816 06:02:14.633885    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487688948Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0816 06:02:14.633893    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487784884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0816 06:02:14.633902    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487850681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0816 06:02:14.633910    4495 command_runner.go:130] > Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487886542Z" level=info msg="containerd successfully booted in 0.024053s"
	I0816 06:02:14.633918    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.473777953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0816 06:02:14.633926    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.495069807Z" level=info msg="Loading containers: start."
	I0816 06:02:14.633947    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.607134105Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0816 06:02:14.633958    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.664329023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0816 06:02:14.633971    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709750511Z" level=warning msg="error locating sandbox id be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c: sandbox be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c not found"
	I0816 06:02:14.633986    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709809833Z" level=warning msg="error locating sandbox id 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f: sandbox 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f not found"
	I0816 06:02:14.633994    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709978520Z" level=info msg="Loading containers: done."
	I0816 06:02:14.634003    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.716985320Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	I0816 06:02:14.634011    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.717159977Z" level=info msg="Daemon has completed initialization"
	I0816 06:02:14.634020    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738467303Z" level=info msg="API listen on /var/run/docker.sock"
	I0816 06:02:14.634026    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 systemd[1]: Started Docker Application Container Engine.
	I0816 06:02:14.634036    4495 command_runner.go:130] > Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738583174Z" level=info msg="API listen on [::]:2376"
	I0816 06:02:14.634045    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.756208041Z" level=info msg="Processing signal 'terminated'"
	I0816 06:02:14.634088    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757227405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0816 06:02:14.634098    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757420422Z" level=info msg="Daemon shutdown complete"
	I0816 06:02:14.634107    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757484340Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0816 06:02:14.634117    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757573969Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0816 06:02:14.634124    4495 command_runner.go:130] > Aug 16 13:01:13 multinode-120000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0816 06:02:14.634130    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0816 06:02:14.634136    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0816 06:02:14.634143    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0816 06:02:14.634153    4495 command_runner.go:130] > Aug 16 13:01:14 multinode-120000-m02 dockerd[910]: time="2024-08-16T13:01:14.796343068Z" level=info msg="Starting up"
	I0816 06:02:14.634163    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0816 06:02:14.634170    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0816 06:02:14.634179    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0816 06:02:14.634186    4495 command_runner.go:130] > Aug 16 13:02:14 multinode-120000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0816 06:02:14.660433    4495 out.go:201] 
	W0816 06:02:14.680488    4495 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 13:01:11 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445128662Z" level=info msg="Starting up"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.445576424Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 13:01:11 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:11.446087902Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.464562092Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479466694Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479531751Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479594404Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479629031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479842292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.479889532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480015247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480066795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480105284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480134704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480284892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.480518152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482158345Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482227762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482355246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482401189Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482551004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.482610366Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484743898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484842901Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484892400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.484992184Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485035944Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485102391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485716230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485838842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485887463Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485941187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.485983421Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486017407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486071726Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486113872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486150386Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486191889Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486229406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486263661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486305970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486413763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486510443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486666027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486744588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486783463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486821985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486859811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486892478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486925903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486956569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.486987244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487017252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487049437Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487086389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487117852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487147113Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487232935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487282108Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487315003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487367683Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487403326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487433733Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487462518Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487688948Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487784884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487850681Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 13:01:11 multinode-120000-m02 dockerd[495]: time="2024-08-16T13:01:11.487886542Z" level=info msg="containerd successfully booted in 0.024053s"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.473777953Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.495069807Z" level=info msg="Loading containers: start."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.607134105Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.664329023Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709750511Z" level=warning msg="error locating sandbox id be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c: sandbox be358ad042afd6ae6a70e1e3d1c973aab25220e5f47ec339f46faa60646cf58c not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709809833Z" level=warning msg="error locating sandbox id 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f: sandbox 4b5f4950088c1df8549d2d60656e515a0134aa2cd99eb50e776584c926b3719f not found"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.709978520Z" level=info msg="Loading containers: done."
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.716985320Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.717159977Z" level=info msg="Daemon has completed initialization"
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738467303Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 13:01:12 multinode-120000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 16 13:01:12 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:12.738583174Z" level=info msg="API listen on [::]:2376"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.756208041Z" level=info msg="Processing signal 'terminated'"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757227405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757420422Z" level=info msg="Daemon shutdown complete"
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757484340Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 13:01:13 multinode-120000-m02 dockerd[488]: time="2024-08-16T13:01:13.757573969Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 13:01:13 multinode-120000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 13:01:14 multinode-120000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:01:14 multinode-120000-m02 dockerd[910]: time="2024-08-16T13:01:14.796343068Z" level=info msg="Starting up"
	Aug 16 13:02:14 multinode-120000-m02 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 13:02:14 multinode-120000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 06:02:14.680575    4495 out.go:270] * 
	W0816 06:02:14.681774    4495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:02:14.723648    4495 out.go:201] 
	
	
	==> Docker <==
	Aug 16 13:00:54 multinode-120000 dockerd[917]: time="2024-08-16T13:00:54.803123851Z" level=info msg="ignoring event" container=fa1e3e4dc76d3592af93051afe252137aa27d3335100ded84abc222312840f44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 13:00:54 multinode-120000 dockerd[924]: time="2024-08-16T13:00:54.803737021Z" level=warning msg="cleaning up after shim disconnected" id=fa1e3e4dc76d3592af93051afe252137aa27d3335100ded84abc222312840f44 namespace=moby
	Aug 16 13:00:54 multinode-120000 dockerd[924]: time="2024-08-16T13:00:54.803819056Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.894894613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.895025525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.895054022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.895243509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.904630514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.904795843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.904872666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:55 multinode-120000 dockerd[924]: time="2024-08-16T13:00:55.905023618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:56 multinode-120000 cri-dockerd[1179]: time="2024-08-16T13:00:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c90b50a3782ce04ff9b1c93330629f358c57f9e9921fa3d083d219b439b58b12/resolv.conf as [nameserver 192.169.0.1]"
	Aug 16 13:00:56 multinode-120000 cri-dockerd[1179]: time="2024-08-16T13:00:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fc5f40264fbdb8aac55d7aecffcc8b0cf37184db8f6137bdbe10af0200d31d0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.147254895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.148363378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.148379055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.148814502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.193998333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.194095542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.194113754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:00:56 multinode-120000 dockerd[924]: time="2024-08-16T13:00:56.197842725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:01:23 multinode-120000 dockerd[924]: time="2024-08-16T13:01:23.046199898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 13:01:23 multinode-120000 dockerd[924]: time="2024-08-16T13:01:23.046265384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 13:01:23 multinode-120000 dockerd[924]: time="2024-08-16T13:01:23.046274565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 13:01:23 multinode-120000 dockerd[924]: time="2024-08-16T13:01:23.046650444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	68178494498cf       6e38f40d628db       53 seconds ago       Running             storage-provisioner       4                   fd7b5dcc2be4f       storage-provisioner
	df6ea3be94a56       8c811b4aec35f       About a minute ago   Running             busybox                   2                   3fc5f40264fbd       busybox-7dff88458-fqvsk
	900c0401934cc       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   c90b50a3782ce       coredns-6f6b679f8f-qvlc2
	5b1dc2fc2f96f       ad83b2ca7b09e       About a minute ago   Running             kube-proxy                2                   a367382ef70cb       kube-proxy-msbdc
	fa1e3e4dc76d3       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   fd7b5dcc2be4f       storage-provisioner
	9c30d88088d11       12968670680f4       About a minute ago   Running             kindnet-cni               2                   15eefc23684e7       kindnet-wd2x6
	307b9e0a93dfd       045733566833c       About a minute ago   Running             kube-controller-manager   2                   c3f00dcc8a197       kube-controller-manager-multinode-120000
	b3df4b553f6be       1766f54c897f0       About a minute ago   Running             kube-scheduler            2                   7504720acca67       kube-scheduler-multinode-120000
	20467f4274eae       2e96e5913fc06       About a minute ago   Running             etcd                      2                   09dd506adb983       etcd-multinode-120000
	e336fd330e3af       604f5db92eaa8       About a minute ago   Running             kube-apiserver            2                   260c4e5f84827       kube-apiserver-multinode-120000
	504928de6f8b8       8c811b4aec35f       3 minutes ago        Exited              busybox                   1                   8c42c50f4ac65       busybox-7dff88458-fqvsk
	856dd8770ce9d       cbb01a7bd410d       3 minutes ago        Exited              coredns                   1                   24fec6612d936       coredns-6f6b679f8f-qvlc2
	5ae7eceff6760       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   796b051433aaf       kindnet-wd2x6
	701ae173eac2a       ad83b2ca7b09e       4 minutes ago        Exited              kube-proxy                1                   5901c509532d8       kube-proxy-msbdc
	26d48b6ad6fb3       2e96e5913fc06       4 minutes ago        Exited              etcd                      1                   c6d3cc10ad7cd       etcd-multinode-120000
	157135701f7d7       1766f54c897f0       4 minutes ago        Exited              kube-scheduler            1                   cbed74cdc18ed       kube-scheduler-multinode-120000
	a5500cc4ab0ef       045733566833c       4 minutes ago        Exited              kube-controller-manager   1                   01366dfa40b19       kube-controller-manager-multinode-120000
	a92131c1b00a8       604f5db92eaa8       4 minutes ago        Exited              kube-apiserver            1                   df82653f7f9dc       kube-apiserver-multinode-120000
	
	
	==> coredns [856dd8770ce9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43035 - 16987 "HINFO IN 6809922861890503982.6901383521737132760. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011714326s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [900c0401934c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43375 - 15771 "HINFO IN 2073296210169865248.812710815555183981. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.005647243s
	
	
	==> describe nodes <==
	Name:               multinode-120000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-120000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-120000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T05_54_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:54:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-120000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 13:02:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 13:00:44 +0000   Fri, 16 Aug 2024 12:54:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 13:00:44 +0000   Fri, 16 Aug 2024 12:54:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 13:00:44 +0000   Fri, 16 Aug 2024 12:54:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 13:00:44 +0000   Fri, 16 Aug 2024 13:00:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-120000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 33a23bd837f8474c911cc6217705050b
	  System UUID:                3c9142c3-0000-0000-931c-22df86688b90
	  Boot ID:                    eddf0116-41c0-4631-ba2b-ac5041374c35
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fqvsk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-6f6b679f8f-qvlc2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m43s
	  kube-system                 etcd-multinode-120000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m49s
	  kube-system                 kindnet-wd2x6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m43s
	  kube-system                 kube-apiserver-multinode-120000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-controller-manager-multinode-120000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-proxy-msbdc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-scheduler-multinode-120000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m42s                  kube-proxy       
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 4m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m53s (x8 over 7m54s)  kubelet          Node multinode-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s (x7 over 7m54s)  kubelet          Node multinode-120000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m53s (x8 over 7m54s)  kubelet          Node multinode-120000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m48s                  kubelet          Node multinode-120000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m48s                  kubelet          Node multinode-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m48s                  kubelet          Node multinode-120000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m44s                  node-controller  Node multinode-120000 event: Registered Node multinode-120000 in Controller
	  Normal  NodeReady                7m24s                  kubelet          Node multinode-120000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node multinode-120000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node multinode-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node multinode-120000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node multinode-120000 event: Registered Node multinode-120000 in Controller
	  Normal  Starting                 117s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 117s)    kubelet          Node multinode-120000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 117s)    kubelet          Node multinode-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 117s)    kubelet          Node multinode-120000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                   node-controller  Node multinode-120000 event: Registered Node multinode-120000 in Controller
	
	
	Name:               multinode-120000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-120000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab84f9bc76071a77c857a14f5c66dccc01002b05
	                    minikube.k8s.io/name=multinode-120000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_16T05_58_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 12:58:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-120000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 12:59:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 16 Aug 2024 12:58:58 +0000   Fri, 16 Aug 2024 13:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 16 Aug 2024 12:58:58 +0000   Fri, 16 Aug 2024 13:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 16 Aug 2024 12:58:58 +0000   Fri, 16 Aug 2024 13:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 16 Aug 2024 12:58:58 +0000   Fri, 16 Aug 2024 13:01:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-120000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 02406c06d38e43bd9f97628d6897e8df
	  System UUID:                ee854de0-0000-0000-ac93-052eb9962a60
	  Boot ID:                    c458eee6-6b2f-4c3b-b271-3f633ff0c733
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4fhqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kindnet-gxqsm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m1s
	  kube-system                 kube-proxy-x88cp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  Starting                 3m30s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m1s (x2 over 7m2s)    kubelet          Node multinode-120000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s (x2 over 7m2s)    kubelet          Node multinode-120000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m1s (x2 over 7m2s)    kubelet          Node multinode-120000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m39s                  kubelet          Node multinode-120000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m33s (x2 over 3m33s)  kubelet          Node multinode-120000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m33s (x2 over 3m33s)  kubelet          Node multinode-120000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m33s (x2 over 3m33s)  kubelet          Node multinode-120000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m18s                  kubelet          Node multinode-120000-m02 status is now: NodeReady
	  Normal  RegisteredNode           109s                   node-controller  Node multinode-120000-m02 event: Registered Node multinode-120000-m02 in Controller
	  Normal  NodeNotReady             69s                    node-controller  Node multinode-120000-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +5.703136] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.515143] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.243313] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.631848] systemd-fstab-generator[468]: Ignoring "noauto" option for root device
	[  +0.095777] systemd-fstab-generator[480]: Ignoring "noauto" option for root device
	[  +1.873310] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.248923] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.057473] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.046591] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.111811] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +2.473797] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.109481] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.100186] systemd-fstab-generator[1156]: Ignoring "noauto" option for root device
	[  +0.128026] systemd-fstab-generator[1171]: Ignoring "noauto" option for root device
	[  +0.382407] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +1.410786] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +1.540022] kauditd_printk_skb: 234 callbacks suppressed
	[  +6.206134] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.202400] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[ +27.095234] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 13:01] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [20467f4274ea] <==
	{"level":"info","ts":"2024-08-16T13:00:21.758365Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-08-16T13:00:21.759261Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-16T13:00:21.759747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 switched to configuration voters=(3612125861281190545)"}
	{"level":"info","ts":"2024-08-16T13:00:21.759800Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","added-peer-id":"3220d9553daad291","added-peer-peer-urls":["https://192.169.0.14:2380"]}
	{"level":"info","ts":"2024-08-16T13:00:21.760133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9b2185e42760b005","local-member-id":"3220d9553daad291","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:00:21.760210Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T13:00:21.762087Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T13:00:21.762137Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T13:00:21.762150Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T13:00:22.723955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T13:00:22.724105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T13:00:22.724257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgPreVoteResp from 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-08-16T13:00:22.724304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became candidate at term 4"}
	{"level":"info","ts":"2024-08-16T13:00:22.724432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgVoteResp from 3220d9553daad291 at term 4"}
	{"level":"info","ts":"2024-08-16T13:00:22.724476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 4"}
	{"level":"info","ts":"2024-08-16T13:00:22.724558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 4"}
	{"level":"info","ts":"2024-08-16T13:00:22.725867Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-120000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T13:00:22.726029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:00:22.726312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T13:00:22.727206Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:00:22.727835Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	{"level":"info","ts":"2024-08-16T13:00:22.728616Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T13:00:22.728920Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T13:00:22.729022Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T13:00:22.730411Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [26d48b6ad6fb] <==
	{"level":"info","ts":"2024-08-16T12:58:01.898336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T12:58:01.898378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgPreVoteResp from 3220d9553daad291 at term 2"}
	{"level":"info","ts":"2024-08-16T12:58:01.898394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T12:58:01.898402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 received MsgVoteResp from 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-08-16T12:58:01.898431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3220d9553daad291 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T12:58:01.898441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3220d9553daad291 elected leader 3220d9553daad291 at term 3"}
	{"level":"info","ts":"2024-08-16T12:58:01.899683Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3220d9553daad291","local-member-attributes":"{Name:multinode-120000 ClientURLs:[https://192.169.0.14:2379]}","request-path":"/0/members/3220d9553daad291/attributes","cluster-id":"9b2185e42760b005","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T12:58:01.899781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:58:01.900739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T12:58:01.900816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T12:58:01.900296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T12:58:01.902240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T12:58:01.902273Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T12:58:01.903007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.14:2379"}
	{"level":"info","ts":"2024-08-16T12:58:01.903315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T12:59:52.404944Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T12:59:52.405012Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-120000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	{"level":"warn","ts":"2024-08-16T12:59:52.405091Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:59:52.405162Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:59:52.420532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T12:59:52.420578Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T12:59:52.422411Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3220d9553daad291","current-leader-member-id":"3220d9553daad291"}
	{"level":"info","ts":"2024-08-16T12:59:52.424102Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-08-16T12:59:52.424161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.14:2380"}
	{"level":"info","ts":"2024-08-16T12:59:52.424188Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-120000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.14:2380"],"advertise-client-urls":["https://192.169.0.14:2379"]}
	
	
	==> kernel <==
	 13:02:16 up 2 min,  0 users,  load average: 0.69, 0.27, 0.10
	Linux multinode-120000 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5ae7eceff676] <==
	I0816 12:59:04.725346       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0816 12:59:04.725354       1 main.go:322] Node multinode-120000-m03 has CIDR [10.244.4.0/24] 
	I0816 12:59:14.734466       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 12:59:14.734507       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:59:14.734755       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0816 12:59:14.734785       1 main.go:322] Node multinode-120000-m03 has CIDR [10.244.4.0/24] 
	I0816 12:59:14.734895       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 12:59:14.734926       1 main.go:299] handling current node
	I0816 12:59:24.725523       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 12:59:24.725633       1 main.go:299] handling current node
	I0816 12:59:24.725652       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 12:59:24.725662       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:59:24.725853       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0816 12:59:24.725913       1 main.go:322] Node multinode-120000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:59:24.725990       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.16 Flags: [] Table: 0} 
	I0816 12:59:34.724914       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0816 12:59:34.725055       1 main.go:322] Node multinode-120000-m03 has CIDR [10.244.2.0/24] 
	I0816 12:59:34.725272       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 12:59:34.725348       1 main.go:299] handling current node
	I0816 12:59:34.725392       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 12:59:34.725418       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 12:59:44.727635       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 12:59:44.727714       1 main.go:299] handling current node
	I0816 12:59:44.727732       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 12:59:44.727741       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [9c30d88088d1] <==
	I0816 13:01:15.724006       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:01:25.723026       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:01:25.723231       1 main.go:299] handling current node
	I0816 13:01:25.723346       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:01:25.723377       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:01:35.726880       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:01:35.727013       1 main.go:299] handling current node
	I0816 13:01:35.727054       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:01:35.727104       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:01:45.722614       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:01:45.722747       1 main.go:299] handling current node
	I0816 13:01:45.723057       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:01:45.723072       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:01:55.721751       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:01:55.721920       1 main.go:299] handling current node
	I0816 13:01:55.721960       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:01:55.721987       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:02:05.729368       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:02:05.729507       1 main.go:299] handling current node
	I0816 13:02:05.729529       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:02:05.729540       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	I0816 13:02:15.729523       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0816 13:02:15.729647       1 main.go:299] handling current node
	I0816 13:02:15.729686       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0816 13:02:15.729714       1 main.go:322] Node multinode-120000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a92131c1b00a] <==
	W0816 12:59:53.416067       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416113       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416136       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416215       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416261       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416308       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416721       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416804       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416872       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416916       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416934       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417018       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417101       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417151       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417197       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417292       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417367       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417417       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417500       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417508       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417328       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417340       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.416919       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417295       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0816 12:59:53.417663       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e336fd330e3a] <==
	I0816 13:00:23.809741       1 policy_source.go:224] refreshing policies
	I0816 13:00:23.819561       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 13:00:23.847497       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 13:00:23.848269       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0816 13:00:23.849205       1 shared_informer.go:320] Caches are synced for configmaps
	I0816 13:00:23.849503       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 13:00:23.849759       1 aggregator.go:171] initial CRD sync complete...
	I0816 13:00:23.849913       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 13:00:23.850083       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 13:00:23.850197       1 cache.go:39] Caches are synced for autoregister controller
	I0816 13:00:23.853351       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0816 13:00:23.896571       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0816 13:00:23.897209       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0816 13:00:23.897265       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0816 13:00:23.897453       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0816 13:00:23.901399       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 13:00:24.753448       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0816 13:00:24.960209       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.14]
	I0816 13:00:24.961418       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 13:00:24.964365       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 13:00:25.559574       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 13:00:25.699547       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 13:00:25.706850       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 13:00:25.747272       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 13:00:25.767155       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [307b9e0a93df] <==
	I0816 13:00:27.745020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="580.732541ms"
	I0816 13:00:27.745309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="580.761815ms"
	I0816 13:00:27.745802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.341µs"
	I0816 13:00:27.745919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="113.975µs"
	I0816 13:00:27.758033       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:00:27.758236       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 13:00:27.768120       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 13:00:44.233603       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-120000-m02"
	I0816 13:00:44.233855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000"
	I0816 13:00:44.240224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000"
	I0816 13:00:47.301792       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000"
	I0816 13:00:57.146065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.417632ms"
	I0816 13:00:57.146135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.466µs"
	I0816 13:00:57.156714       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="94.101µs"
	I0816 13:00:57.173069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="6.336838ms"
	I0816 13:00:57.173174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="29.779µs"
	I0816 13:01:07.160524       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lww85"
	I0816 13:01:07.171435       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lww85"
	I0816 13:01:07.171473       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vskxm"
	I0816 13:01:07.181850       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vskxm"
	I0816 13:01:07.309267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m02"
	I0816 13:01:07.319733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m02"
	I0816 13:01:07.323045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.92444ms"
	I0816 13:01:07.323580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="490.974µs"
	I0816 13:01:12.405377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m02"
	
	
	==> kube-controller-manager [a5500cc4ab0e] <==
	I0816 12:59:18.937999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.238µs"
	I0816 12:59:19.587872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="3.339549ms"
	I0816 12:59:19.588520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="455.469µs"
	I0816 12:59:22.068119       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-120000-m02"
	I0816 12:59:22.068472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:23.206950       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-120000-m03\" does not exist"
	I0816 12:59:23.207172       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-120000-m02"
	I0816 12:59:23.219292       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-120000-m03" podCIDRs=["10.244.2.0/24"]
	I0816 12:59:23.219471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:23.219559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:23.425191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:23.720525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:25.067396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.946µs"
	I0816 12:59:25.104318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.705µs"
	I0816 12:59:25.108738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.457µs"
	I0816 12:59:25.110683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.057µs"
	I0816 12:59:26.511689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:33.468614       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:38.152581       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-120000-m02"
	I0816 12:59:38.152656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:38.159086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:40.815637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:40.823125       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	I0816 12:59:41.121467       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-120000-m02"
	I0816 12:59:41.121475       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-120000-m03"
	
	
	==> kube-proxy [5b1dc2fc2f96] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 13:00:24.948660       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 13:00:24.967739       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0816 13:00:24.967803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 13:00:24.996860       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 13:00:24.996897       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 13:00:24.996914       1 server_linux.go:169] "Using iptables Proxier"
	I0816 13:00:24.998908       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 13:00:24.999188       1 server.go:483] "Version info" version="v1.31.0"
	I0816 13:00:24.999220       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:00:25.001287       1 config.go:197] "Starting service config controller"
	I0816 13:00:25.001473       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 13:00:25.001721       1 config.go:104] "Starting endpoint slice config controller"
	I0816 13:00:25.001746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 13:00:25.002758       1 config.go:326] "Starting node config controller"
	I0816 13:00:25.002783       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 13:00:25.102757       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 13:00:25.102845       1 shared_informer.go:320] Caches are synced for service config
	I0816 13:00:25.103161       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [701ae173eac2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 12:58:04.063223       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 12:58:04.073277       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.14"]
	E0816 12:58:04.073334       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 12:58:04.107330       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 12:58:04.107378       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 12:58:04.107396       1 server_linux.go:169] "Using iptables Proxier"
	I0816 12:58:04.109620       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 12:58:04.110001       1 server.go:483] "Version info" version="v1.31.0"
	I0816 12:58:04.110030       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:58:04.112641       1 config.go:197] "Starting service config controller"
	I0816 12:58:04.112796       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 12:58:04.112836       1 config.go:104] "Starting endpoint slice config controller"
	I0816 12:58:04.112841       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 12:58:04.113534       1 config.go:326] "Starting node config controller"
	I0816 12:58:04.113562       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 12:58:04.213971       1 shared_informer.go:320] Caches are synced for node config
	I0816 12:58:04.214096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 12:58:04.214038       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [157135701f7d] <==
	I0816 12:58:00.526236       1 serving.go:386] Generated self-signed cert in-memory
	W0816 12:58:02.794479       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 12:58:02.794602       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 12:58:02.794655       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 12:58:02.794741       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 12:58:02.835600       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 12:58:02.835693       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 12:58:02.837833       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 12:58:02.838213       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 12:58:02.838391       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:58:02.838634       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 12:58:02.938999       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 12:59:52.427590       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0816 12:59:52.427664       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0816 12:59:52.431754       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b3df4b553f6b] <==
	I0816 13:00:22.040389       1 serving.go:386] Generated self-signed cert in-memory
	W0816 13:00:23.782784       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 13:00:23.782819       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 13:00:23.782826       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 13:00:23.782831       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 13:00:23.811352       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 13:00:23.814042       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 13:00:23.817742       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 13:00:23.817848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 13:00:23.819868       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 13:00:23.817964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 13:00:23.920972       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 13:00:35 multinode-120000 kubelet[1436]: E0816 13:00:35.024667    1436 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Aug 16 13:00:35 multinode-120000 kubelet[1436]: E0816 13:00:35.992629    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6f6b679f8f-qvlc2" podUID="08cca513-a37c-44f0-b558-30530308cb3f"
	Aug 16 13:00:35 multinode-120000 kubelet[1436]: E0816 13:00:35.993270    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-fqvsk" podUID="fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc"
	Aug 16 13:00:37 multinode-120000 kubelet[1436]: E0816 13:00:37.993094    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6f6b679f8f-qvlc2" podUID="08cca513-a37c-44f0-b558-30530308cb3f"
	Aug 16 13:00:37 multinode-120000 kubelet[1436]: E0816 13:00:37.996637    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-fqvsk" podUID="fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc"
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.595396    1436 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.595557    1436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08cca513-a37c-44f0-b558-30530308cb3f-config-volume podName:08cca513-a37c-44f0-b558-30530308cb3f nodeName:}" failed. No retries permitted until 2024-08-16 13:00:55.595538193 +0000 UTC m=+35.760441938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/08cca513-a37c-44f0-b558-30530308cb3f-config-volume") pod "coredns-6f6b679f8f-qvlc2" (UID: "08cca513-a37c-44f0-b558-30530308cb3f") : object "kube-system"/"coredns" not registered
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.697013    1436 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.697197    1436 projected.go:194] Error preparing data for projected volume kube-api-access-l2xpj for pod default/busybox-7dff88458-fqvsk: object "default"/"kube-root-ca.crt" not registered
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.697459    1436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc-kube-api-access-l2xpj podName:fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc nodeName:}" failed. No retries permitted until 2024-08-16 13:00:55.697437983 +0000 UTC m=+35.862341732 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2xpj" (UniqueName: "kubernetes.io/projected/fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc-kube-api-access-l2xpj") pod "busybox-7dff88458-fqvsk" (UID: "fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc") : object "default"/"kube-root-ca.crt" not registered
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.991896    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-fqvsk" podUID="fe1b2f05-b01a-4ae5-b034-1c1dc7581fbc"
	Aug 16 13:00:39 multinode-120000 kubelet[1436]: E0816 13:00:39.992425    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6f6b679f8f-qvlc2" podUID="08cca513-a37c-44f0-b558-30530308cb3f"
	Aug 16 13:00:54 multinode-120000 kubelet[1436]: I0816 13:00:54.868088    1436 scope.go:117] "RemoveContainer" containerID="f09b2d4d9690f0664189348c45c4a9d931e1a00bcae4d31f7649239be18ed5aa"
	Aug 16 13:00:54 multinode-120000 kubelet[1436]: I0816 13:00:54.868275    1436 scope.go:117] "RemoveContainer" containerID="fa1e3e4dc76d3592af93051afe252137aa27d3335100ded84abc222312840f44"
	Aug 16 13:00:54 multinode-120000 kubelet[1436]: E0816 13:00:54.868354    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(03776551-6bfa-4cdb-a48f-b32c38e3f900)\"" pod="kube-system/storage-provisioner" podUID="03776551-6bfa-4cdb-a48f-b32c38e3f900"
	Aug 16 13:00:56 multinode-120000 kubelet[1436]: I0816 13:00:56.095831    1436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fc5f40264fbdb8aac55d7aecffcc8b0cf37184db8f6137bdbe10af0200d31d0"
	Aug 16 13:00:56 multinode-120000 kubelet[1436]: I0816 13:00:56.106935    1436 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90b50a3782ce04ff9b1c93330629f358c57f9e9921fa3d083d219b439b58b12"
	Aug 16 13:01:08 multinode-120000 kubelet[1436]: I0816 13:01:08.994178    1436 scope.go:117] "RemoveContainer" containerID="fa1e3e4dc76d3592af93051afe252137aa27d3335100ded84abc222312840f44"
	Aug 16 13:01:08 multinode-120000 kubelet[1436]: E0816 13:01:08.994886    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(03776551-6bfa-4cdb-a48f-b32c38e3f900)\"" pod="kube-system/storage-provisioner" podUID="03776551-6bfa-4cdb-a48f-b32c38e3f900"
	Aug 16 13:01:20 multinode-120000 kubelet[1436]: E0816 13:01:20.018061    1436 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 13:01:20 multinode-120000 kubelet[1436]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 13:01:20 multinode-120000 kubelet[1436]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 13:01:20 multinode-120000 kubelet[1436]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 13:01:20 multinode-120000 kubelet[1436]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 13:01:22 multinode-120000 kubelet[1436]: I0816 13:01:22.994020    1436 scope.go:117] "RemoveContainer" containerID="fa1e3e4dc76d3592af93051afe252137aa27d3335100ded84abc222312840f44"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-120000 -n multinode-120000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-120000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (138.11s)

                                                
                                    
TestScheduledStopUnix (141.96s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-226000 --memory=2048 --driver=hyperkit 
E0816 06:07:52.915621    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-226000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.626933573s)

                                                
                                                
-- stdout --
	* [scheduled-stop-226000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-226000" primary control-plane node in "scheduled-stop-226000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-226000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c8:b8:a:9f:20
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-226000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:f2:60:d4:1a:48
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:f2:60:d4:1a:48
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-226000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-226000" primary control-plane node in "scheduled-stop-226000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-226000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c8:b8:a:9f:20
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-226000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:f2:60:d4:1a:48
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:f2:60:d4:1a:48
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-16 06:08:25.303763 -0700 PDT m=+2931.985194823
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-226000 -n scheduled-stop-226000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-226000 -n scheduled-stop-226000: exit status 7 (79.250228ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 06:08:25.381092    5119 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:08:25.381113    5119 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-226000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-226000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-226000
E0816 06:08:27.963496    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-226000: (5.254789385s)
--- FAIL: TestScheduledStopUnix (141.96s)

                                                
                                    
TestPause/serial/Start (139.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-894000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-894000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m18.994596168s)

                                                
                                                
-- stdout --
	* [pause-894000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-894000" primary control-plane node in "pause-894000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-894000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:f0:66:fc:9c:24
	* Failed to start hyperkit VM. Running "minikube delete -p pause-894000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:1c:f1:0:20:b9
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:1c:f1:0:20:b9
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-894000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-894000 -n pause-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-894000 -n pause-894000: exit status 7 (80.24034ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 06:49:43.476240    7137 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0816 06:49:43.476261    7137 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-894000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (139.08s)

                                                
                                    
TestNetworkPlugins/group/false/Start (76.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : exit status 90 (1m16.868436316s)

                                                
                                                
-- stdout --
	* [false-199000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "false-199000" primary control-plane node in "false-199000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 06:53:38.890219    7929 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:53:38.890516    7929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:53:38.890522    7929 out.go:358] Setting ErrFile to fd 2...
	I0816 06:53:38.890525    7929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:53:38.890706    7929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:53:38.892334    7929 out.go:352] Setting JSON to false
	I0816 06:53:38.917038    7929 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6196,"bootTime":1723810222,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 06:53:38.917133    7929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 06:53:38.972206    7929 out.go:177] * [false-199000] minikube v1.33.1 on Darwin 14.6.1
	I0816 06:53:39.001525    7929 notify.go:220] Checking for updates...
	I0816 06:53:39.027417    7929 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 06:53:39.089447    7929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 06:53:39.127347    7929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 06:53:39.147386    7929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 06:53:39.170588    7929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:53:39.191686    7929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 06:53:39.214226    7929 config.go:182] Loaded profile config "custom-flannel-199000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:53:39.214426    7929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 06:53:39.245585    7929 out.go:177] * Using the hyperkit driver based on user configuration
	I0816 06:53:39.287590    7929 start.go:297] selected driver: hyperkit
	I0816 06:53:39.287621    7929 start.go:901] validating driver "hyperkit" against <nil>
	I0816 06:53:39.287640    7929 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 06:53:39.292071    7929 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:53:39.292196    7929 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 06:53:39.300840    7929 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 06:53:39.304802    7929 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:53:39.304821    7929 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 06:53:39.304853    7929 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 06:53:39.305053    7929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 06:53:39.305119    7929 cni.go:84] Creating CNI manager for "false"
	I0816 06:53:39.305196    7929 start.go:340] cluster config:
	{Name:false-199000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-199000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 06:53:39.305297    7929 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 06:53:39.347515    7929 out.go:177] * Starting "false-199000" primary control-plane node in "false-199000" cluster
	I0816 06:53:39.371624    7929 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 06:53:39.371676    7929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 06:53:39.371695    7929 cache.go:56] Caching tarball of preloaded images
	I0816 06:53:39.371846    7929 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 06:53:39.371859    7929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 06:53:39.371961    7929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/false-199000/config.json ...
	I0816 06:53:39.371985    7929 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/false-199000/config.json: {Name:mkc0fb49343f97e06192fa91603e0480f10aeb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 06:53:39.372366    7929 start.go:360] acquireMachinesLock for false-199000: {Name:mk01a3788d838f3f01163e41175de7e0c2d2cd1c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 06:53:39.372449    7929 start.go:364] duration metric: took 69.366µs to acquireMachinesLock for "false-199000"
	I0816 06:53:39.372481    7929 start.go:93] Provisioning new machine with config: &{Name:false-199000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.0 ClusterName:false-199000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 06:53:39.372540    7929 start.go:125] createHost starting for "" (driver="hyperkit")
	I0816 06:53:39.380226    7929 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 06:53:39.380349    7929 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:53:39.380394    7929 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:53:39.389155    7929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55810
	I0816 06:53:39.389571    7929 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:53:39.389979    7929 main.go:141] libmachine: Using API Version  1
	I0816 06:53:39.389990    7929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:53:39.390261    7929 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:53:39.390397    7929 main.go:141] libmachine: (false-199000) Calling .GetMachineName
	I0816 06:53:39.390481    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:39.390600    7929 start.go:159] libmachine.API.Create for "false-199000" (driver="hyperkit")
	I0816 06:53:39.390656    7929 client.go:168] LocalClient.Create starting
	I0816 06:53:39.390693    7929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem
	I0816 06:53:39.390747    7929 main.go:141] libmachine: Decoding PEM data...
	I0816 06:53:39.390765    7929 main.go:141] libmachine: Parsing certificate...
	I0816 06:53:39.390829    7929 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem
	I0816 06:53:39.390868    7929 main.go:141] libmachine: Decoding PEM data...
	I0816 06:53:39.390880    7929 main.go:141] libmachine: Parsing certificate...
	I0816 06:53:39.390894    7929 main.go:141] libmachine: Running pre-create checks...
	I0816 06:53:39.390906    7929 main.go:141] libmachine: (false-199000) Calling .PreCreateCheck
	I0816 06:53:39.391004    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:39.391164    7929 main.go:141] libmachine: (false-199000) Calling .GetConfigRaw
	I0816 06:53:39.418064    7929 main.go:141] libmachine: Creating machine...
	I0816 06:53:39.418095    7929 main.go:141] libmachine: (false-199000) Calling .Create
	I0816 06:53:39.418354    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:39.418657    7929 main.go:141] libmachine: (false-199000) DBG | I0816 06:53:39.418334    7937 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 06:53:39.418805    7929 main.go:141] libmachine: (false-199000) Downloading /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0816 06:53:39.605833    7929 main.go:141] libmachine: (false-199000) DBG | I0816 06:53:39.605775    7937 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/id_rsa...
	I0816 06:53:39.743927    7929 main.go:141] libmachine: (false-199000) DBG | I0816 06:53:39.743870    7937 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/false-199000.rawdisk...
	I0816 06:53:39.743944    7929 main.go:141] libmachine: (false-199000) DBG | Writing magic tar header
	I0816 06:53:39.744011    7929 main.go:141] libmachine: (false-199000) DBG | Writing SSH key tar header
	I0816 06:53:39.744297    7929 main.go:141] libmachine: (false-199000) DBG | I0816 06:53:39.744265    7937 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000 ...
	I0816 06:53:40.125101    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:40.125117    7929 main.go:141] libmachine: (false-199000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/hyperkit.pid
	I0816 06:53:40.125152    7929 main.go:141] libmachine: (false-199000) DBG | Using UUID 255dd12a-fffc-47da-82f4-ed0bb762ad59
	I0816 06:53:40.152317    7929 main.go:141] libmachine: (false-199000) DBG | Generated MAC 5e:26:2d:a9:f0:e5
	I0816 06:53:40.152336    7929 main.go:141] libmachine: (false-199000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-199000
	I0816 06:53:40.152372    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"255dd12a-fffc-47da-82f4-ed0bb762ad59", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:53:40.152396    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"255dd12a-fffc-47da-82f4-ed0bb762ad59", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000122330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0816 06:53:40.152447    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "255dd12a-fffc-47da-82f4-ed0bb762ad59", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/false-199000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-199000"}
	I0816 06:53:40.152480    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 255dd12a-fffc-47da-82f4-ed0bb762ad59 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/false-199000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/tty,log=/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/bzimage,/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-199000"
	I0816 06:53:40.152498    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0816 06:53:40.155461    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 DEBUG: hyperkit: Pid is 7938
	I0816 06:53:40.155939    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 0
	I0816 06:53:40.155955    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:40.156082    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:40.157139    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:40.157268    7929 main.go:141] libmachine: (false-199000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0816 06:53:40.157285    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e:17:ed:21:51:a2 ID:1,e:17:ed:21:51:a2 Lease:0x66c0ab43}
	I0816 06:53:40.157325    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:7e:cc:64:6e:ed:f0 ID:1,7e:cc:64:6e:ed:f0 Lease:0x66c0ab04}
	I0816 06:53:40.157338    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ea:fb:aa:35:3a:9c ID:1,ea:fb:aa:35:3a:9c Lease:0x66c0aae2}
	I0816 06:53:40.157369    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:b6:7f:d5:9f:37:66 ID:1,b6:7f:d5:9f:37:66 Lease:0x66bf5957}
	I0816 06:53:40.157380    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:22:ad:29:d9:da ID:1,5e:22:ad:29:d9:da Lease:0x66c0aa93}
	I0816 06:53:40.157393    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:62:da:b8:c:1d:82 ID:1,62:da:b8:c:1d:82 Lease:0x66bf5927}
	I0816 06:53:40.157400    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:82:7a:1:e7:d2:69 ID:1,82:7a:1:e7:d2:69 Lease:0x66c0aa33}
	I0816 06:53:40.157423    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:a2:50:7e:4a:12:3d ID:1,a2:50:7e:4a:12:3d Lease:0x66c0a74a}
	I0816 06:53:40.157436    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b2:9f:43:ca:e5:c5 ID:1,b2:9f:43:ca:e5:c5 Lease:0x66c0a44f}
	I0816 06:53:40.157445    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:53:40.157451    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:53:40.157464    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:53:40.157477    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:53:40.157487    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:53:40.157493    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:53:40.157500    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:53:40.157513    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:53:40.157528    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:53:40.157547    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:53:40.157562    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:53:40.157572    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:53:40.157580    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:53:40.157587    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:53:40.157595    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:53:40.157609    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:53:40.157617    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:53:40.157626    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:53:40.163419    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0816 06:53:40.172636    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0816 06:53:40.173459    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:53:40.173483    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:53:40.173498    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:53:40.173510    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:53:40.585416    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0816 06:53:40.585433    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0816 06:53:40.700540    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0816 06:53:40.700564    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0816 06:53:40.700611    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0816 06:53:40.700626    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0816 06:53:40.701302    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0816 06:53:40.701316    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0816 06:53:42.159117    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 1
	I0816 06:53:42.159134    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:42.159252    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:42.160067    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:42.160138    7929 main.go:141] libmachine: (false-199000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0816 06:53:42.160150    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e:17:ed:21:51:a2 ID:1,e:17:ed:21:51:a2 Lease:0x66c0ab43}
	I0816 06:53:42.160161    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:7e:cc:64:6e:ed:f0 ID:1,7e:cc:64:6e:ed:f0 Lease:0x66c0ab04}
	I0816 06:53:42.160169    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ea:fb:aa:35:3a:9c ID:1,ea:fb:aa:35:3a:9c Lease:0x66c0aae2}
	I0816 06:53:42.160190    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:b6:7f:d5:9f:37:66 ID:1,b6:7f:d5:9f:37:66 Lease:0x66bf5957}
	I0816 06:53:42.160216    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:22:ad:29:d9:da ID:1,5e:22:ad:29:d9:da Lease:0x66c0aa93}
	I0816 06:53:42.160227    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:62:da:b8:c:1d:82 ID:1,62:da:b8:c:1d:82 Lease:0x66bf5927}
	I0816 06:53:42.160234    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:82:7a:1:e7:d2:69 ID:1,82:7a:1:e7:d2:69 Lease:0x66c0aa33}
	I0816 06:53:42.160243    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:a2:50:7e:4a:12:3d ID:1,a2:50:7e:4a:12:3d Lease:0x66c0a74a}
	I0816 06:53:42.160251    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b2:9f:43:ca:e5:c5 ID:1,b2:9f:43:ca:e5:c5 Lease:0x66c0a44f}
	I0816 06:53:42.160258    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:53:42.160264    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:53:42.160274    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:53:42.160282    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:53:42.160290    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:53:42.160298    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:53:42.160313    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:53:42.160321    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:53:42.160338    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:53:42.160351    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:53:42.160363    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:53:42.160372    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:53:42.160397    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:53:42.160405    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:53:42.160412    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:53:42.160420    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:53:42.160427    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:53:42.160435    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:53:44.161244    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 2
	I0816 06:53:44.161259    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:44.161387    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:44.162267    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:44.162336    7929 main.go:141] libmachine: (false-199000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0816 06:53:44.162349    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e:17:ed:21:51:a2 ID:1,e:17:ed:21:51:a2 Lease:0x66c0ab43}
	I0816 06:53:44.162363    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:7e:cc:64:6e:ed:f0 ID:1,7e:cc:64:6e:ed:f0 Lease:0x66c0ab04}
	I0816 06:53:44.162371    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ea:fb:aa:35:3a:9c ID:1,ea:fb:aa:35:3a:9c Lease:0x66c0aae2}
	I0816 06:53:44.162378    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:b6:7f:d5:9f:37:66 ID:1,b6:7f:d5:9f:37:66 Lease:0x66bf5957}
	I0816 06:53:44.162386    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:22:ad:29:d9:da ID:1,5e:22:ad:29:d9:da Lease:0x66c0aa93}
	I0816 06:53:44.162393    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:62:da:b8:c:1d:82 ID:1,62:da:b8:c:1d:82 Lease:0x66bf5927}
	I0816 06:53:44.162399    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:82:7a:1:e7:d2:69 ID:1,82:7a:1:e7:d2:69 Lease:0x66c0aa33}
	I0816 06:53:44.162407    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:a2:50:7e:4a:12:3d ID:1,a2:50:7e:4a:12:3d Lease:0x66c0a74a}
	I0816 06:53:44.162414    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b2:9f:43:ca:e5:c5 ID:1,b2:9f:43:ca:e5:c5 Lease:0x66c0a44f}
	I0816 06:53:44.162432    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:53:44.162441    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:53:44.162458    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:53:44.162469    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:53:44.162478    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:53:44.162486    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:53:44.162492    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:53:44.162508    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:53:44.162523    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:53:44.162547    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:53:44.162560    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:53:44.162568    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:53:44.162577    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:53:44.162584    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:53:44.162592    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:53:44.162601    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:53:44.162609    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:53:44.162618    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:53:46.162607    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 3
	I0816 06:53:46.162626    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:46.162699    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:46.163498    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:46.163560    7929 main.go:141] libmachine: (false-199000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0816 06:53:46.163569    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e:17:ed:21:51:a2 ID:1,e:17:ed:21:51:a2 Lease:0x66c0ab43}
	I0816 06:53:46.163583    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:7e:cc:64:6e:ed:f0 ID:1,7e:cc:64:6e:ed:f0 Lease:0x66c0ab04}
	I0816 06:53:46.163591    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ea:fb:aa:35:3a:9c ID:1,ea:fb:aa:35:3a:9c Lease:0x66c0aae2}
	I0816 06:53:46.163598    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:b6:7f:d5:9f:37:66 ID:1,b6:7f:d5:9f:37:66 Lease:0x66bf5957}
	I0816 06:53:46.163607    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:22:ad:29:d9:da ID:1,5e:22:ad:29:d9:da Lease:0x66c0aa93}
	I0816 06:53:46.163614    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:62:da:b8:c:1d:82 ID:1,62:da:b8:c:1d:82 Lease:0x66bf5927}
	I0816 06:53:46.163622    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:82:7a:1:e7:d2:69 ID:1,82:7a:1:e7:d2:69 Lease:0x66c0aa33}
	I0816 06:53:46.163631    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:a2:50:7e:4a:12:3d ID:1,a2:50:7e:4a:12:3d Lease:0x66c0a74a}
	I0816 06:53:46.163643    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b2:9f:43:ca:e5:c5 ID:1,b2:9f:43:ca:e5:c5 Lease:0x66c0a44f}
	I0816 06:53:46.163663    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:53:46.163671    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:53:46.163678    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:53:46.163686    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:53:46.163692    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:53:46.163707    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:53:46.163719    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:53:46.163738    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:53:46.163747    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:53:46.163755    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:53:46.163761    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:53:46.163767    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:53:46.163773    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:53:46.163780    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:53:46.163790    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:53:46.163798    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:53:46.163805    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:53:46.163812    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:53:46.295019    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0816 06:53:46.295060    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0816 06:53:46.295068    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0816 06:53:46.318981    7929 main.go:141] libmachine: (false-199000) DBG | 2024/08/16 06:53:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0816 06:53:48.164069    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 4
	I0816 06:53:48.164085    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:48.164158    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:48.164990    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:48.165072    7929 main.go:141] libmachine: (false-199000) DBG | Found 27 entries in /var/db/dhcpd_leases!
	I0816 06:53:48.165084    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e:17:ed:21:51:a2 ID:1,e:17:ed:21:51:a2 Lease:0x66c0ab43}
	I0816 06:53:48.165096    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:7e:cc:64:6e:ed:f0 ID:1,7e:cc:64:6e:ed:f0 Lease:0x66c0ab04}
	I0816 06:53:48.165106    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ea:fb:aa:35:3a:9c ID:1,ea:fb:aa:35:3a:9c Lease:0x66c0aae2}
	I0816 06:53:48.165118    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:b6:7f:d5:9f:37:66 ID:1,b6:7f:d5:9f:37:66 Lease:0x66bf5957}
	I0816 06:53:48.165130    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:5e:22:ad:29:d9:da ID:1,5e:22:ad:29:d9:da Lease:0x66c0aa93}
	I0816 06:53:48.165160    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:62:da:b8:c:1d:82 ID:1,62:da:b8:c:1d:82 Lease:0x66bf5927}
	I0816 06:53:48.165203    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:82:7a:1:e7:d2:69 ID:1,82:7a:1:e7:d2:69 Lease:0x66c0aa33}
	I0816 06:53:48.165211    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:a2:50:7e:4a:12:3d ID:1,a2:50:7e:4a:12:3d Lease:0x66c0a74a}
	I0816 06:53:48.165218    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b2:9f:43:ca:e5:c5 ID:1,b2:9f:43:ca:e5:c5 Lease:0x66c0a44f}
	I0816 06:53:48.165229    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:e2:ee:3d:77:e6:db ID:1,e2:ee:3d:77:e6:db Lease:0x66c0a0dc}
	I0816 06:53:48.165238    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4e:1f:38:c8:2:e ID:1,4e:1f:38:c8:2:e Lease:0x66c0a01c}
	I0816 06:53:48.165246    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:91:de:94:4f:e9 ID:1,9e:91:de:94:4f:e9 Lease:0x66bf4e06}
	I0816 06:53:48.165265    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:34:28:e3:0:46 ID:1,f2:34:28:e3:0:46 Lease:0x66bf4d3e}
	I0816 06:53:48.165279    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:fa:8b:6e:be:7a:d1 ID:1,fa:8b:6e:be:7a:d1 Lease:0x66bf4e13}
	I0816 06:53:48.165289    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fa:4b:15:6b:d9:84 ID:1,fa:4b:15:6b:d9:84 Lease:0x66c09ed9}
	I0816 06:53:48.165297    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:42:6c:6f:94:b:fe ID:1,42:6c:6f:94:b:fe Lease:0x66bf4b4c}
	I0816 06:53:48.165307    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:79:3f:9:6a:a9 ID:1,ce:79:3f:9:6a:a9 Lease:0x66c09c83}
	I0816 06:53:48.165316    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:8f:4:ff:6e:c2 ID:1,1e:8f:4:ff:6e:c2 Lease:0x66c09c2b}
	I0816 06:53:48.165327    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:ff:c0:16:c7:d7 ID:1,96:ff:c0:16:c7:d7 Lease:0x66c09bfc}
	I0816 06:53:48.165335    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:72:86:6a:19:80:d9 ID:1,72:86:6a:19:80:d9 Lease:0x66c09b96}
	I0816 06:53:48.165343    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:da:75:16:53:b7 ID:1,f2:da:75:16:53:b7 Lease:0x66c09b45}
	I0816 06:53:48.165350    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:22:7d:14:f5:63 ID:1,ca:22:7d:14:f5:63 Lease:0x66bf493d}
	I0816 06:53:48.165357    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:3a:16:de:25:18:f9 ID:1,3a:16:de:25:18:f9 Lease:0x66c09b0c}
	I0816 06:53:48.165365    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:36:31:25:a5:a2:ed ID:1,36:31:25:a5:a2:ed Lease:0x66c09ae2}
	I0816 06:53:48.165374    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:16:9:7e:1e:cc:e8 ID:1,16:9:7e:1e:cc:e8 Lease:0x66c097cb}
	I0816 06:53:48.165384    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:5a:c8:55:33:da:3d ID:1,5a:c8:55:33:da:3d Lease:0x66c09703}
	I0816 06:53:48.165395    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:fe:66:e2:df:45:85 ID:1,fe:66:e2:df:45:85 Lease:0x66c0957b}
	I0816 06:53:50.165494    7929 main.go:141] libmachine: (false-199000) DBG | Attempt 5
	I0816 06:53:50.165507    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:50.165585    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:50.166393    7929 main.go:141] libmachine: (false-199000) DBG | Searching for 5e:26:2d:a9:f0:e5 in /var/db/dhcpd_leases ...
	I0816 06:53:50.166438    7929 main.go:141] libmachine: (false-199000) DBG | Found 28 entries in /var/db/dhcpd_leases!
	I0816 06:53:50.166452    7929 main.go:141] libmachine: (false-199000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5e:26:2d:a9:f0:e5 ID:1,5e:26:2d:a9:f0:e5 Lease:0x66c0ab6d}
	I0816 06:53:50.166459    7929 main.go:141] libmachine: (false-199000) DBG | Found match: 5e:26:2d:a9:f0:e5
	I0816 06:53:50.166464    7929 main.go:141] libmachine: (false-199000) DBG | IP: 192.169.0.29
	I0816 06:53:50.166515    7929 main.go:141] libmachine: (false-199000) Calling .GetConfigRaw
	I0816 06:53:50.167084    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:50.167185    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:50.167279    7929 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 06:53:50.167292    7929 main.go:141] libmachine: (false-199000) Calling .GetState
	I0816 06:53:50.167381    7929 main.go:141] libmachine: (false-199000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:53:50.167442    7929 main.go:141] libmachine: (false-199000) DBG | hyperkit pid from json: 7938
	I0816 06:53:50.168314    7929 main.go:141] libmachine: Detecting operating system of created instance...
	I0816 06:53:50.168332    7929 main.go:141] libmachine: Waiting for SSH to be available...
	I0816 06:53:50.168339    7929 main.go:141] libmachine: Getting to WaitForSSH function...
	I0816 06:53:50.168343    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:50.168510    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:50.168640    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:50.168779    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:50.168907    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:50.169088    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:50.169338    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:50.169347    7929 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0816 06:53:51.233775    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:53:51.233798    7929 main.go:141] libmachine: Detecting the provisioner...
	I0816 06:53:51.233805    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.233935    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.234048    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.234144    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.234217    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.234368    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.234530    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.234538    7929 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 06:53:51.298287    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0816 06:53:51.298341    7929 main.go:141] libmachine: found compatible host: buildroot
	I0816 06:53:51.298348    7929 main.go:141] libmachine: Provisioning with buildroot...
	I0816 06:53:51.298354    7929 main.go:141] libmachine: (false-199000) Calling .GetMachineName
	I0816 06:53:51.298492    7929 buildroot.go:166] provisioning hostname "false-199000"
	I0816 06:53:51.298504    7929 main.go:141] libmachine: (false-199000) Calling .GetMachineName
	I0816 06:53:51.298608    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.298686    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.298795    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.298888    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.298978    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.299118    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.299268    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.299277    7929 main.go:141] libmachine: About to run SSH command:
	sudo hostname false-199000 && echo "false-199000" | sudo tee /etc/hostname
	I0816 06:53:51.374091    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: false-199000
	
	I0816 06:53:51.374117    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.374266    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.374372    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.374460    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.374562    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.374698    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.374847    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.374859    7929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-199000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-199000/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-199000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 06:53:51.446223    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 06:53:51.446246    7929 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1009/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1009/.minikube}
	I0816 06:53:51.446266    7929 buildroot.go:174] setting up certificates
	I0816 06:53:51.446279    7929 provision.go:84] configureAuth start
	I0816 06:53:51.446286    7929 main.go:141] libmachine: (false-199000) Calling .GetMachineName
	I0816 06:53:51.446421    7929 main.go:141] libmachine: (false-199000) Calling .GetIP
	I0816 06:53:51.446516    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.446619    7929 provision.go:143] copyHostCerts
	I0816 06:53:51.446706    7929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem, removing ...
	I0816 06:53:51.446716    7929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem
	I0816 06:53:51.446855    7929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/cert.pem (1123 bytes)
	I0816 06:53:51.447094    7929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem, removing ...
	I0816 06:53:51.447101    7929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem
	I0816 06:53:51.447177    7929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/key.pem (1679 bytes)
	I0816 06:53:51.447344    7929 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem, removing ...
	I0816 06:53:51.447350    7929 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem
	I0816 06:53:51.447425    7929 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1009/.minikube/ca.pem (1082 bytes)
	I0816 06:53:51.447568    7929 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca-key.pem org=jenkins.false-199000 san=[127.0.0.1 192.169.0.29 false-199000 localhost minikube]
	I0816 06:53:51.653598    7929 provision.go:177] copyRemoteCerts
	I0816 06:53:51.653664    7929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 06:53:51.653681    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.653820    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.653917    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.654018    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.654108    7929 sshutil.go:53] new ssh client: &{IP:192.169.0.29 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/id_rsa Username:docker}
	I0816 06:53:51.692016    7929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 06:53:51.712181    7929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0816 06:53:51.732457    7929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 06:53:51.752771    7929 provision.go:87] duration metric: took 306.475329ms to configureAuth
	I0816 06:53:51.752786    7929 buildroot.go:189] setting minikube options for container-runtime
	I0816 06:53:51.752932    7929 config.go:182] Loaded profile config "false-199000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:53:51.752947    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:51.753093    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.753198    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.753280    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.753382    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.753467    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.753592    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.753719    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.753727    7929 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 06:53:51.818068    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 06:53:51.818094    7929 buildroot.go:70] root file system type: tmpfs
	I0816 06:53:51.818159    7929 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 06:53:51.818173    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.818307    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.818412    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.818510    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.818600    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.818738    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.818878    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.818928    7929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 06:53:51.893683    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 06:53:51.893705    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:51.893835    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:51.893929    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.894014    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:51.894137    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:51.894274    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:51.894440    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:51.894452    7929 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 06:53:53.505729    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 06:53:53.505750    7929 main.go:141] libmachine: Checking connection to Docker...
	I0816 06:53:53.505757    7929 main.go:141] libmachine: (false-199000) Calling .GetURL
	I0816 06:53:53.505893    7929 main.go:141] libmachine: Docker is up and running!
	I0816 06:53:53.505901    7929 main.go:141] libmachine: Reticulating splines...
	I0816 06:53:53.505907    7929 client.go:171] duration metric: took 14.115121773s to LocalClient.Create
	I0816 06:53:53.505919    7929 start.go:167] duration metric: took 14.115197515s to libmachine.API.Create "false-199000"
	I0816 06:53:53.505931    7929 start.go:293] postStartSetup for "false-199000" (driver="hyperkit")
	I0816 06:53:53.505938    7929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 06:53:53.505948    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:53.506105    7929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 06:53:53.506119    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:53.506211    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:53.506307    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:53.506409    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:53.506495    7929 sshutil.go:53] new ssh client: &{IP:192.169.0.29 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/id_rsa Username:docker}
	I0816 06:53:53.544878    7929 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 06:53:53.548051    7929 info.go:137] Remote host: Buildroot 2023.02.9
	I0816 06:53:53.548069    7929 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/addons for local assets ...
	I0816 06:53:53.548186    7929 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1009/.minikube/files for local assets ...
	I0816 06:53:53.548380    7929 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem -> 15542.pem in /etc/ssl/certs
	I0816 06:53:53.548585    7929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 06:53:53.555830    7929 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/ssl/certs/15542.pem --> /etc/ssl/certs/15542.pem (1708 bytes)
	I0816 06:53:53.575653    7929 start.go:296] duration metric: took 69.71413ms for postStartSetup
	I0816 06:53:53.575683    7929 main.go:141] libmachine: (false-199000) Calling .GetConfigRaw
	I0816 06:53:53.576305    7929 main.go:141] libmachine: (false-199000) Calling .GetIP
	I0816 06:53:53.576451    7929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/false-199000/config.json ...
	I0816 06:53:53.576758    7929 start.go:128] duration metric: took 14.204081483s to createHost
	I0816 06:53:53.576771    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:53.576857    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:53.576935    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:53.577017    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:53.577096    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:53.577219    7929 main.go:141] libmachine: Using SSH client type: native
	I0816 06:53:53.577350    7929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb613ea0] 0xb616c00 <nil>  [] 0s} 192.169.0.29 22 <nil> <nil>}
	I0816 06:53:53.577357    7929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 06:53:53.641246    7929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723816433.744944389
	
	I0816 06:53:53.641263    7929 fix.go:216] guest clock: 1723816433.744944389
	I0816 06:53:53.641268    7929 fix.go:229] Guest: 2024-08-16 06:53:53.744944389 -0700 PDT Remote: 2024-08-16 06:53:53.576765 -0700 PDT m=+14.723468100 (delta=168.179389ms)
	I0816 06:53:53.641283    7929 fix.go:200] guest clock delta is within tolerance: 168.179389ms
	I0816 06:53:53.641287    7929 start.go:83] releasing machines lock for "false-199000", held for 14.268705594s
	I0816 06:53:53.641305    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:53.641442    7929 main.go:141] libmachine: (false-199000) Calling .GetIP
	I0816 06:53:53.641534    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:53.641877    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:53.641977    7929 main.go:141] libmachine: (false-199000) Calling .DriverName
	I0816 06:53:53.642062    7929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 06:53:53.642092    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:53.642125    7929 ssh_runner.go:195] Run: cat /version.json
	I0816 06:53:53.642137    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHHostname
	I0816 06:53:53.642205    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:53.642224    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHPort
	I0816 06:53:53.642312    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:53.642331    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHKeyPath
	I0816 06:53:53.642408    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:53.642438    7929 main.go:141] libmachine: (false-199000) Calling .GetSSHUsername
	I0816 06:53:53.642503    7929 sshutil.go:53] new ssh client: &{IP:192.169.0.29 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/id_rsa Username:docker}
	I0816 06:53:53.642522    7929 sshutil.go:53] new ssh client: &{IP:192.169.0.29 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/false-199000/id_rsa Username:docker}
	I0816 06:53:53.720856    7929 ssh_runner.go:195] Run: systemctl --version
	I0816 06:53:53.726408    7929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 06:53:53.731197    7929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 06:53:53.731269    7929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 06:53:53.739852    7929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 06:53:53.754714    7929 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 06:53:53.754734    7929 start.go:495] detecting cgroup driver to use...
	I0816 06:53:53.754846    7929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:53:53.778640    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 06:53:53.789061    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 06:53:53.798411    7929 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 06:53:53.798465    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 06:53:53.807542    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:53:53.816840    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 06:53:53.826557    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 06:53:53.836017    7929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 06:53:53.845692    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 06:53:53.856112    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 06:53:53.865810    7929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 06:53:53.874914    7929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 06:53:53.883579    7929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 06:53:53.892693    7929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:53:54.002978    7929 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 06:53:54.021099    7929 start.go:495] detecting cgroup driver to use...
	I0816 06:53:54.021179    7929 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 06:53:54.045624    7929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:53:54.056384    7929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 06:53:54.074506    7929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 06:53:54.085951    7929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:53:54.096281    7929 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 06:53:54.117566    7929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 06:53:54.128013    7929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 06:53:54.143017    7929 ssh_runner.go:195] Run: which cri-dockerd
	I0816 06:53:54.146054    7929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 06:53:54.153193    7929 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0816 06:53:54.167005    7929 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 06:53:54.269795    7929 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 06:53:54.377761    7929 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 06:53:54.377834    7929 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 06:53:54.392530    7929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 06:53:54.497441    7929 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 06:54:55.532853    7929 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.034864274s)
	I0816 06:54:55.532917    7929 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0816 06:54:55.574442    7929 out.go:201] 
	W0816 06:54:55.598726    7929 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 16 13:53:52 false-199000 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:53:52 false-199000 dockerd[528]: time="2024-08-16T13:53:52.336464537Z" level=info msg="Starting up"
	Aug 16 13:53:52 false-199000 dockerd[528]: time="2024-08-16T13:53:52.337475982Z" level=info msg="containerd not running, starting managed containerd"
	Aug 16 13:53:52 false-199000 dockerd[528]: time="2024-08-16T13:53:52.338009143Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=538
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.352689012Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.367893884Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.367924111Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.367968521Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.367979332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368049488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368084641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368268720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368303067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368316214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368323679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368382068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.368528512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370087403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370126112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370267932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370301700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370372538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.370435420Z" level=info msg="metadata content store policy set" policy=shared
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377216882Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377277304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377296202Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377312262Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377325351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377395860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377581624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377680782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377716905Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377729877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377739245Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377747563Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377755691Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377764344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377780510Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377791948Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377801374Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377808984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377822080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377831520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377840259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377851103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377859245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377868493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377876156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377883940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377891927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377906813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377917677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377925753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377933338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377942922Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377965249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377981628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.377991833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378021248Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378056077Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378066412Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378074528Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378081815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378090101Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378099683Z" level=info msg="NRI interface is disabled by configuration."
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378231518Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378308866Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378364618Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 16 13:53:52 false-199000 dockerd[538]: time="2024-08-16T13:53:52.378400329Z" level=info msg="containerd successfully booted in 0.026463s"
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.384643051Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.391730174Z" level=info msg="Loading containers: start."
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.478856792Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.567411028Z" level=info msg="Loading containers: done."
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.577854982Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.578042851Z" level=info msg="Daemon has completed initialization"
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.608626275Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 16 13:53:53 false-199000 dockerd[528]: time="2024-08-16T13:53:53.608815492Z" level=info msg="API listen on [::]:2376"
	Aug 16 13:53:53 false-199000 systemd[1]: Started Docker Application Container Engine.
	Aug 16 13:53:54 false-199000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.612098648Z" level=info msg="Processing signal 'terminated'"
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613140045Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613352653Z" level=info msg="Daemon shutdown complete"
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613480265Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613520942Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 13:53:55 false-199000 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 13:53:55 false-199000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 13:53:55 false-199000 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:53:55 false-199000 dockerd[921]: time="2024-08-16T13:53:55.661127228Z" level=info msg="Starting up"
	Aug 16 13:54:55 false-199000 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 13:54:55 false-199000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 13:54:55 false-199000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 13:54:55 false-199000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.612098648Z" level=info msg="Processing signal 'terminated'"
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613140045Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613352653Z" level=info msg="Daemon shutdown complete"
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613480265Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 16 13:53:54 false-199000 dockerd[528]: time="2024-08-16T13:53:54.613520942Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 16 13:53:55 false-199000 systemd[1]: docker.service: Deactivated successfully.
	Aug 16 13:53:55 false-199000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 16 13:53:55 false-199000 systemd[1]: Starting Docker Application Container Engine...
	Aug 16 13:53:55 false-199000 dockerd[921]: time="2024-08-16T13:53:55.661127228Z" level=info msg="Starting up"
	Aug 16 13:54:55 false-199000 dockerd[921]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 16 13:54:55 false-199000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 16 13:54:55 false-199000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 16 13:54:55 false-199000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0816 06:54:55.598829    7929 out.go:270] * 
	* 
	W0816 06:54:55.599935    7929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 06:54:55.641393    7929 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/false/Start (76.89s)
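The restart above dies because the new dockerd (pid 921) gives up dialing `/run/containerd/containerd.sock` after its dial deadline ("context deadline exceeded"), roughly 60 seconds after "Starting up". A minimal sketch of the same wait-with-deadline probe, useful for checking from inside the VM (e.g. via `minikube ssh`) whether the containerd socket ever becomes connectable. This is not minikube's or dockerd's actual code; only the socket path and the ~60 s window come from the log, everything else is illustrative.

```python
import socket
import time


def wait_for_unix_socket(path: str, timeout: float = 60.0, interval: float = 0.5) -> bool:
    """Retry connecting to a unix socket until it accepts or the deadline passes.

    Mirrors dockerd's startup behavior: it dials /run/containerd/containerd.sock
    and fails with "context deadline exceeded" if nothing answers in time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True  # something is listening on the socket
        except OSError:
            time.sleep(interval)  # not there yet; retry until the deadline
        finally:
            s.close()
    return False
```

In the failing run, a probe like `wait_for_unix_socket("/run/containerd/containerd.sock")` would have returned False, pointing at containerd (rather than dockerd itself) as the component that never came back after the restart.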

                                                
                                    

Test pass (289/322)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.31.0/json-events 9.71
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.3
18 TestDownloadOnly/v1.31.0/DeleteAll 0.24
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.94
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 207.71
29 TestAddons/serial/Volcano 39.71
31 TestAddons/serial/GCPAuth/Namespaces 0.1
33 TestAddons/parallel/Registry 15.82
34 TestAddons/parallel/Ingress 19.52
35 TestAddons/parallel/InspektorGadget 11.74
36 TestAddons/parallel/MetricsServer 5.51
37 TestAddons/parallel/HelmTiller 10.06
39 TestAddons/parallel/CSI 57.09
40 TestAddons/parallel/Headlamp 19.31
41 TestAddons/parallel/CloudSpanner 5.45
42 TestAddons/parallel/LocalPath 52.51
43 TestAddons/parallel/NvidiaDevicePlugin 6.42
44 TestAddons/parallel/Yakd 11.62
45 TestAddons/StoppedEnableDisable 5.92
53 TestHyperKitDriverInstallOrUpdate 9.18
56 TestErrorSpam/setup 38.45
57 TestErrorSpam/start 1.73
58 TestErrorSpam/status 0.5
59 TestErrorSpam/pause 1.37
60 TestErrorSpam/unpause 1.44
61 TestErrorSpam/stop 155.91
64 TestFunctional/serial/CopySyncFile 0.01
65 TestFunctional/serial/StartWithProxy 53.74
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 64.55
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.18
79 TestFunctional/serial/MinikubeKubectlCmd 1.19
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.56
81 TestFunctional/serial/ExtraConfig 39.77
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.8
84 TestFunctional/serial/LogsFileCmd 2.72
85 TestFunctional/serial/InvalidService 4.15
87 TestFunctional/parallel/ConfigCmd 0.58
88 TestFunctional/parallel/DashboardCmd 9.66
89 TestFunctional/parallel/DryRun 1.14
90 TestFunctional/parallel/InternationalLanguage 0.48
91 TestFunctional/parallel/StatusCmd 0.53
95 TestFunctional/parallel/ServiceCmdConnect 12.38
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 25.5
99 TestFunctional/parallel/SSHCmd 0.31
100 TestFunctional/parallel/CpCmd 0.97
101 TestFunctional/parallel/MySQL 25.52
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.06
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/Version/short 0.12
113 TestFunctional/parallel/Version/components 0.42
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.6
119 TestFunctional/parallel/ImageCommands/Setup 1.84
120 TestFunctional/parallel/DockerEnv/bash 0.6
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.88
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.61
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
132 TestFunctional/parallel/ProfileCmd/profile_list 0.27
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.15
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
145 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
146 TestFunctional/parallel/ServiceCmd/List 0.78
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.77
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
149 TestFunctional/parallel/ServiceCmd/Format 0.44
150 TestFunctional/parallel/ServiceCmd/URL 0.44
151 TestFunctional/parallel/MountCmd/any-port 5.9
152 TestFunctional/parallel/MountCmd/specific-port 1.51
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 203.99
161 TestMultiControlPlane/serial/DeployApp 4.84
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 52.92
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.36
166 TestMultiControlPlane/serial/CopyFile 9.12
167 TestMultiControlPlane/serial/StopSecondaryNode 8.71
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
169 TestMultiControlPlane/serial/RestartSecondaryNode 37.48
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 192.21
172 TestMultiControlPlane/serial/DeleteSecondaryNode 7.38
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.25
174 TestMultiControlPlane/serial/StopCluster 24.97
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.26
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.34
181 TestImageBuild/serial/Setup 37.96
182 TestImageBuild/serial/NormalBuild 1.88
183 TestImageBuild/serial/BuildWithBuildArg 0.77
184 TestImageBuild/serial/BuildWithDockerIgnore 0.62
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
189 TestJSONOutput/start/Command 77.48
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.46
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.45
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.33
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.57
217 TestMainNoArgs 0.1
218 TestMinikubeProfile 90.7
224 TestMultiNode/serial/FreshStart2Nodes 105.83
225 TestMultiNode/serial/DeployApp2Nodes 4.52
226 TestMultiNode/serial/PingHostFrom2Pods 0.89
227 TestMultiNode/serial/AddNode 45.64
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.19
230 TestMultiNode/serial/CopyFile 5.27
231 TestMultiNode/serial/StopNode 2.84
232 TestMultiNode/serial/StartAfterStop 41.69
233 TestMultiNode/serial/RestartKeepsNodes 139.86
234 TestMultiNode/serial/DeleteNode 3.29
235 TestMultiNode/serial/StopMultiNode 16.8
237 TestMultiNode/serial/ValidateNameConflict 44.67
241 TestPreload 172.78
244 TestSkaffold 110.11
247 TestRunningBinaryUpgrade 91.65
249 TestKubernetesUpgrade 1369.47
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.15
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.76
264 TestStoppedBinaryUpgrade/Setup 1.54
265 TestStoppedBinaryUpgrade/Upgrade 119.2
268 TestStoppedBinaryUpgrade/MinikubeLogs 2.53
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
278 TestNoKubernetes/serial/StartWithK8s 74.71
279 TestNetworkPlugins/group/auto/Start 95.98
280 TestNoKubernetes/serial/StartWithStopK8s 8.81
281 TestNoKubernetes/serial/Start 22.2
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
283 TestNoKubernetes/serial/ProfileList 0.48
284 TestNoKubernetes/serial/Stop 2.38
285 TestNoKubernetes/serial/StartNoArgs 19.32
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.12
287 TestNetworkPlugins/group/kindnet/Start 61.86
288 TestNetworkPlugins/group/auto/KubeletFlags 0.15
289 TestNetworkPlugins/group/auto/NetCatPod 11.15
290 TestNetworkPlugins/group/auto/DNS 0.12
291 TestNetworkPlugins/group/auto/Localhost 0.1
292 TestNetworkPlugins/group/auto/HairPin 0.1
293 TestNetworkPlugins/group/calico/Start 70.03
294 TestNetworkPlugins/group/kindnet/ControllerPod 6
295 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
296 TestNetworkPlugins/group/kindnet/NetCatPod 11.14
297 TestNetworkPlugins/group/kindnet/DNS 0.13
298 TestNetworkPlugins/group/kindnet/Localhost 0.1
299 TestNetworkPlugins/group/kindnet/HairPin 0.1
300 TestNetworkPlugins/group/custom-flannel/Start 51.61
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/KubeletFlags 0.16
303 TestNetworkPlugins/group/calico/NetCatPod 11.14
304 TestNetworkPlugins/group/calico/DNS 0.13
305 TestNetworkPlugins/group/calico/Localhost 0.11
306 TestNetworkPlugins/group/calico/HairPin 0.1
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.15
310 TestNetworkPlugins/group/custom-flannel/DNS 0.12
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
313 TestNetworkPlugins/group/enable-default-cni/Start 78.97
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.15
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.18
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
319 TestNetworkPlugins/group/flannel/Start 63.59
320 TestNetworkPlugins/group/bridge/Start 67.25
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
323 TestNetworkPlugins/group/bridge/NetCatPod 10.14
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
325 TestNetworkPlugins/group/flannel/NetCatPod 10.14
326 TestNetworkPlugins/group/bridge/DNS 25.99
327 TestNetworkPlugins/group/flannel/DNS 0.13
328 TestNetworkPlugins/group/flannel/Localhost 0.1
329 TestNetworkPlugins/group/flannel/HairPin 0.1
330 TestNetworkPlugins/group/kubenet/Start 50.15
331 TestNetworkPlugins/group/bridge/Localhost 0.1
332 TestNetworkPlugins/group/bridge/HairPin 0.1
334 TestStartStop/group/old-k8s-version/serial/FirstStart 144.05
335 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
336 TestNetworkPlugins/group/kubenet/NetCatPod 11.15
337 TestNetworkPlugins/group/kubenet/DNS 0.12
338 TestNetworkPlugins/group/kubenet/Localhost 0.1
339 TestNetworkPlugins/group/kubenet/HairPin 0.1
341 TestStartStop/group/no-preload/serial/FirstStart 55.11
342 TestStartStop/group/no-preload/serial/DeployApp 9.22
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.74
344 TestStartStop/group/no-preload/serial/Stop 8.4
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
346 TestStartStop/group/no-preload/serial/SecondStart 311.96
347 TestStartStop/group/old-k8s-version/serial/DeployApp 9.34
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
349 TestStartStop/group/old-k8s-version/serial/Stop 8.41
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
351 TestStartStop/group/old-k8s-version/serial/SecondStart 404.03
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
355 TestStartStop/group/no-preload/serial/Pause 2
357 TestStartStop/group/embed-certs/serial/FirstStart 52.99
358 TestStartStop/group/embed-certs/serial/DeployApp 8.21
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
360 TestStartStop/group/embed-certs/serial/Stop 8.41
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
362 TestStartStop/group/embed-certs/serial/SecondStart 293.6
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
366 TestStartStop/group/old-k8s-version/serial/Pause 1.87
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.71
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.22
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.77
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.43
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 292.29
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
377 TestStartStop/group/embed-certs/serial/Pause 1.94
379 TestStartStop/group/newest-cni/serial/FirstStart 41.35
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
382 TestStartStop/group/newest-cni/serial/Stop 8.42
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
384 TestStartStop/group/newest-cni/serial/SecondStart 29.57
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
388 TestStartStop/group/newest-cni/serial/Pause 1.88
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.91
TestDownloadOnly/v1.20.0/json-events (14.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-808000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-808000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (14.101115193s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-808000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-808000: exit status 85 (290.024296ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-808000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |          |
	|         | -p download-only-808000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:19:33
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:19:33.430768    1556 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:33.430960    1556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:33.430965    1556 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:33.430969    1556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:33.431139    1556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	W0816 05:19:33.431249    1556 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-1009/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-1009/.minikube/config/config.json: no such file or directory
	I0816 05:19:33.433119    1556 out.go:352] Setting JSON to true
	I0816 05:19:33.456973    1556 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":551,"bootTime":1723810222,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:19:33.457066    1556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:33.479473    1556 out.go:97] [download-only-808000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:19:33.479708    1556 notify.go:220] Checking for updates...
	W0816 05:19:33.479707    1556 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 05:19:33.501503    1556 out.go:169] MINIKUBE_LOCATION=19423
	I0816 05:19:33.522497    1556 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:19:33.544312    1556 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:19:33.567435    1556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:33.588557    1556 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	W0816 05:19:33.630472    1556 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 05:19:33.630948    1556 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:33.679643    1556 out.go:97] Using the hyperkit driver based on user configuration
	I0816 05:19:33.679711    1556 start.go:297] selected driver: hyperkit
	I0816 05:19:33.679728    1556 start.go:901] validating driver "hyperkit" against <nil>
	I0816 05:19:33.679957    1556 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:33.680335    1556 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 05:19:34.080977    1556 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 05:19:34.086155    1556 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:19:34.086177    1556 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 05:19:34.086206    1556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:34.090830    1556 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0816 05:19:34.091388    1556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:19:34.091419    1556 cni.go:84] Creating CNI manager for ""
	I0816 05:19:34.091433    1556 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 05:19:34.091499    1556 start.go:340] cluster config:
	{Name:download-only-808000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:34.091728    1556 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:34.113258    1556 out.go:97] Downloading VM boot image ...
	I0816 05:19:34.113379    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0816 05:19:40.648328    1556 out.go:97] Starting "download-only-808000" primary control-plane node in "download-only-808000" cluster
	I0816 05:19:40.648370    1556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:40.708589    1556 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0816 05:19:40.708608    1556 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:40.709119    1556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 05:19:40.730037    1556 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 05:19:40.730113    1556 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 05:19:40.821800    1556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-808000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-808000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-808000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.25s)

TestDownloadOnly/v1.31.0/json-events (9.71s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-133000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-133000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (9.708169059s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.71s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-133000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-133000: exit status 85 (299.002433ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-808000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-808000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| delete  | -p download-only-808000        | download-only-808000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT | 16 Aug 24 05:19 PDT |
	| start   | -o=json --download-only        | download-only-133000 | jenkins | v1.33.1 | 16 Aug 24 05:19 PDT |                     |
	|         | -p download-only-133000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 05:19:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 05:19:48.303607    1582 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:19:48.303821    1582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:48.303826    1582 out.go:358] Setting ErrFile to fd 2...
	I0816 05:19:48.303830    1582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:19:48.304003    1582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:19:48.305533    1582 out.go:352] Setting JSON to true
	I0816 05:19:48.331361    1582 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":566,"bootTime":1723810222,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:19:48.331458    1582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:19:48.352784    1582 out.go:97] [download-only-133000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:19:48.352944    1582 notify.go:220] Checking for updates...
	I0816 05:19:48.374857    1582 out.go:169] MINIKUBE_LOCATION=19423
	I0816 05:19:48.397741    1582 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:19:48.419584    1582 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:19:48.440651    1582 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:19:48.461775    1582 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	W0816 05:19:48.503641    1582 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 05:19:48.504074    1582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:19:48.534774    1582 out.go:97] Using the hyperkit driver based on user configuration
	I0816 05:19:48.534830    1582 start.go:297] selected driver: hyperkit
	I0816 05:19:48.534847    1582 start.go:901] validating driver "hyperkit" against <nil>
	I0816 05:19:48.535065    1582 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:48.535289    1582 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1009/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0816 05:19:48.545245    1582 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0816 05:19:48.549516    1582 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:19:48.549538    1582 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0816 05:19:48.549566    1582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 05:19:48.552227    1582 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0816 05:19:48.552389    1582 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 05:19:48.552458    1582 cni.go:84] Creating CNI manager for ""
	I0816 05:19:48.552473    1582 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 05:19:48.552483    1582 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 05:19:48.552548    1582 start.go:340] cluster config:
	{Name:download-only-133000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-133000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:19:48.552634    1582 iso.go:125] acquiring lock: {Name:mke4ec41b46f0b885a95a5bd835f2a0445e654fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 05:19:48.573757    1582 out.go:97] Starting "download-only-133000" primary control-plane node in "download-only-133000" cluster
	I0816 05:19:48.573794    1582 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:19:48.634140    1582 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0816 05:19:48.634191    1582 cache.go:56] Caching tarball of preloaded images
	I0816 05:19:48.634635    1582 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 05:19:48.656619    1582 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 05:19:48.656646    1582 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0816 05:19:48.748004    1582 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19423-1009/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-133000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-133000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.30s)

TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-133000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.94s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-437000 --alsologtostderr --binary-mirror http://127.0.0.1:49622 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-437000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-437000
--- PASS: TestBinaryMirror (0.94s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-040000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-040000: exit status 85 (188.422195ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-040000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-040000: exit status 85 (209.099549ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (207.71s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m27.707915017s)
--- PASS: TestAddons/Setup (207.71s)

TestAddons/serial/Volcano (39.71s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 12.050713ms
addons_test.go:905: volcano-admission stabilized in 12.083208ms
addons_test.go:913: volcano-controller stabilized in 12.111158ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-qgr76" [7b9d4119-ddfd-491b-ad6d-5d54891cd511] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002308204s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-qndq5" [05f72c10-b325-4898-878e-144088a91aaf] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0024379s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2nmgk" [fcb60011-b9e6-45b8-8935-7339e7143481] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003059908s
addons_test.go:932: (dbg) Run:  kubectl --context addons-040000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-040000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-040000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [691f2964-ab7c-482e-838b-531bd7c38748] Pending
helpers_test.go:344: "test-job-nginx-0" [691f2964-ab7c-482e-838b-531bd7c38748] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [691f2964-ab7c-482e-838b-531bd7c38748] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003378491s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable volcano --alsologtostderr -v=1: (10.411622988s)
--- PASS: TestAddons/serial/Volcano (39.71s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-040000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-040000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (15.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.82603ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-7svhw" [35366c51-e787-4a1f-9da4-2d4279e65d26] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002100525s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-72hqh" [dd3c08c9-623f-4dec-b440-e89d1d5e408b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002907562s
addons_test.go:342: (dbg) Run:  kubectl --context addons-040000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-040000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-040000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.135390261s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.82s)

TestAddons/parallel/Ingress (19.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-040000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5fdf57d3-5bda-4e78-909d-4c8e439a1d0f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5fdf57d3-5bda-4e78-909d-4c8e439a1d0f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002556772s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-040000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable ingress-dns --alsologtostderr -v=1: (1.104385464s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable ingress --alsologtostderr -v=1: (7.479752305s)
--- PASS: TestAddons/parallel/Ingress (19.52s)

TestAddons/parallel/InspektorGadget (11.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hv7rr" [2bbcc22c-70ed-44d5-b0d6-41f2d071f3cc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00301571s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-040000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-040000: (5.73779203s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

TestAddons/parallel/MetricsServer (5.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.608245ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-hwdv7" [ae0905c0-0675-41af-97e9-a4410ed5ba0c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002784556s
addons_test.go:417: (dbg) Run:  kubectl --context addons-040000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

TestAddons/parallel/HelmTiller (10.06s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.736797ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-8msc6" [05cd0bec-2e0f-4b81-8beb-067ae6a6973c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003809994s
addons_test.go:475: (dbg) Run:  kubectl --context addons-040000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-040000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.634608513s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.06s)

TestAddons/parallel/CSI (57.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.283787ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [53bebc84-653e-4fc8-bff4-c0c108b708aa] Pending
2024/08/16 05:24:40 [DEBUG] GET http://192.169.0.2:5000
helpers_test.go:344: "task-pv-pod" [53bebc84-653e-4fc8-bff4-c0c108b708aa] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [53bebc84-653e-4fc8-bff4-c0c108b708aa] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002489721s
addons_test.go:590: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-040000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-040000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-040000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [86d91618-07f1-44c2-bd6b-536b4e6cd736] Pending
helpers_test.go:344: "task-pv-pod-restore" [86d91618-07f1-44c2-bd6b-536b4e6cd736] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [86d91618-07f1-44c2-bd6b-536b4e6cd736] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003115634s
addons_test.go:632: (dbg) Run:  kubectl --context addons-040000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-040000 delete pod task-pv-pod-restore: (1.130529085s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-040000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-040000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.447829372s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.09s)
TestAddons/parallel/Headlamp (19.31s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-040000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-fw6k9" [b81d76ce-82f1-41a4-b71e-7f6b8bb706d7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-fw6k9" [b81d76ce-82f1-41a4-b71e-7f6b8bb706d7] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.0022528s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable headlamp --alsologtostderr -v=1: (5.476248169s)
--- PASS: TestAddons/parallel/Headlamp (19.31s)
TestAddons/parallel/CloudSpanner (5.45s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-tzjzs" [946d8614-f994-4768-bde5-42a91dabb189] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003308281s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-040000
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)
TestAddons/parallel/LocalPath (52.51s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-040000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-040000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [da41206f-30df-4d2b-8e3d-6c1faad84211] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [da41206f-30df-4d2b-8e3d-6c1faad84211] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [da41206f-30df-4d2b-8e3d-6c1faad84211] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003070395s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-040000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 ssh "cat /opt/local-path-provisioner/pvc-8d24c800-eaaa-483e-9f79-66acea46378c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-040000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-040000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.873418183s)
--- PASS: TestAddons/parallel/LocalPath (52.51s)
TestAddons/parallel/NvidiaDevicePlugin (6.42s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v4lvw" [f36d824c-c413-4e04-831f-76022bdb96bb] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.015450276s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-040000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.42s)
TestAddons/parallel/Yakd (11.62s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qp5xw" [4e7861cc-e8e7-4eb5-b552-8480ccd79bb1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002653376s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-040000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-040000 addons disable yakd --alsologtostderr -v=1: (5.617764192s)
--- PASS: TestAddons/parallel/Yakd (11.62s)
TestAddons/StoppedEnableDisable (5.92s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-040000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-040000: (5.379814425s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-040000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-040000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-040000
--- PASS: TestAddons/StoppedEnableDisable (5.92s)
TestHyperKitDriverInstallOrUpdate (9.18s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.18s)
TestErrorSpam/setup (38.45s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-916000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-916000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 --driver=hyperkit : (38.450906123s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (38.45s)
TestErrorSpam/start (1.73s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 start --dry-run
--- PASS: TestErrorSpam/start (1.73s)
TestErrorSpam/status (0.5s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 status
--- PASS: TestErrorSpam/status (0.50s)
TestErrorSpam/pause (1.37s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 pause
--- PASS: TestErrorSpam/pause (1.37s)
TestErrorSpam/unpause (1.44s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)
TestErrorSpam/stop (155.91s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop: (5.444250662s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop
E0816 05:28:27.987893    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:27.997758    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.009493    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.033142    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.075094    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.158667    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.322298    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:28.645865    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:29.289544    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:30.571910    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:33.135577    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop: (1m15.236222135s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop
E0816 05:28:38.257291    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:28:48.499703    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:29:08.982734    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:29:49.943716    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-916000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-916000 stop: (1m15.22959234s)
--- PASS: TestErrorSpam/stop (155.91s)
TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19423-1009/.minikube/files/etc/test/nested/copy/1554/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)
TestFunctional/serial/StartWithProxy (53.74s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (53.743612238s)
--- PASS: TestFunctional/serial/StartWithProxy (53.74s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (64.55s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --alsologtostderr -v=8
E0816 05:31:11.865404    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --alsologtostderr -v=8: (1m4.549858157s)
functional_test.go:663: soft start took 1m4.550403774s for "functional-525000" cluster.
--- PASS: TestFunctional/serial/SoftStart (64.55s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-525000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.1: (1.279531622s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:3.3: (1.075139685s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2020819859/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache add minikube-local-cache-test:functional-525000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache delete minikube-local-cache-test:functional-525000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-525000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (143.140246ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 kubectl -- --context functional-525000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 kubectl -- --context functional-525000 get pods: (1.193019373s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-525000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-525000 get pods: (1.559396992s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

TestFunctional/serial/ExtraConfig (39.77s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-525000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.772030515s)
functional_test.go:761: restart took 39.772152679s for "functional-525000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.77s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-525000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.80s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 logs: (2.80017303s)
--- PASS: TestFunctional/serial/LogsCmd (2.80s)

TestFunctional/serial/LogsFileCmd (2.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2523051117/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2523051117/001/logs.txt: (2.719349037s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.72s)

TestFunctional/serial/InvalidService (4.15s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-525000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-525000: exit status 115 (272.776474ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31108 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-525000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 config get cpus: exit status 14 (54.84589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 config get cpus: exit status 14 (84.877965ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)

TestFunctional/parallel/DashboardCmd (9.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-525000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-525000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2964: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.66s)

TestFunctional/parallel/DryRun (1.14s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (659.287698ms)

-- stdout --
	* [functional-525000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0816 05:33:44.270647    2942 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:33:44.271211    2942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:44.271220    2942 out.go:358] Setting ErrFile to fd 2...
	I0816 05:33:44.271227    2942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:44.271607    2942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:33:44.273425    2942 out.go:352] Setting JSON to false
	I0816 05:33:44.296618    2942 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1402,"bootTime":1723810222,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:33:44.296721    2942 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:33:44.321309    2942 out.go:177] * [functional-525000] minikube v1.33.1 on Darwin 14.6.1
	I0816 05:33:44.361993    2942 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:33:44.361998    2942 notify.go:220] Checking for updates...
	I0816 05:33:44.419865    2942 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:33:44.461956    2942 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:33:44.520008    2942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:33:44.577762    2942 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 05:33:44.634931    2942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:33:44.656327    2942 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:33:44.656673    2942 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:33:44.656719    2942 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:33:44.665814    2942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50781
	I0816 05:33:44.666194    2942 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:33:44.666627    2942 main.go:141] libmachine: Using API Version  1
	I0816 05:33:44.666637    2942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:33:44.666836    2942 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:33:44.666938    2942 main.go:141] libmachine: (functional-525000) Calling .DriverName
	I0816 05:33:44.667130    2942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:33:44.667392    2942 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:33:44.667417    2942 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:33:44.676003    2942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50783
	I0816 05:33:44.676367    2942 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:33:44.676699    2942 main.go:141] libmachine: Using API Version  1
	I0816 05:33:44.676713    2942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:33:44.676912    2942 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:33:44.677015    2942 main.go:141] libmachine: (functional-525000) Calling .DriverName
	I0816 05:33:44.708140    2942 out.go:177] * Using the hyperkit driver based on existing profile
	I0816 05:33:44.750005    2942 start.go:297] selected driver: hyperkit
	I0816 05:33:44.750034    2942 start.go:901] validating driver "hyperkit" against &{Name:functional-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:functional-525000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:33:44.750229    2942 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:33:44.774954    2942 out.go:201] 
	W0816 05:33:44.811856    2942 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 05:33:44.853984    2942 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.14s)

TestFunctional/parallel/InternationalLanguage (0.48s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-525000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (484.125397ms)

-- stdout --
	* [functional-525000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0816 05:33:40.952921    2894 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:33:40.953071    2894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:40.953076    2894 out.go:358] Setting ErrFile to fd 2...
	I0816 05:33:40.953080    2894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:33:40.953294    2894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:33:40.954812    2894 out.go:352] Setting JSON to false
	I0816 05:33:40.977589    2894 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1398,"bootTime":1723810222,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0816 05:33:40.977699    2894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 05:33:40.998774    2894 out.go:177] * [functional-525000] minikube v1.33.1 sur Darwin 14.6.1
	I0816 05:33:41.040738    2894 out.go:177]   - MINIKUBE_LOCATION=19423
	I0816 05:33:41.040760    2894 notify.go:220] Checking for updates...
	I0816 05:33:41.082776    2894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	I0816 05:33:41.103601    2894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0816 05:33:41.124764    2894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 05:33:41.145938    2894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	I0816 05:33:41.166994    2894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 05:33:41.188409    2894 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:33:41.189095    2894 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:33:41.189200    2894 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:33:41.199037    2894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50712
	I0816 05:33:41.199406    2894 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:33:41.199805    2894 main.go:141] libmachine: Using API Version  1
	I0816 05:33:41.199819    2894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:33:41.200051    2894 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:33:41.200160    2894 main.go:141] libmachine: (functional-525000) Calling .DriverName
	I0816 05:33:41.200340    2894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0816 05:33:41.200599    2894 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:33:41.200625    2894 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:33:41.209117    2894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50714
	I0816 05:33:41.209475    2894 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:33:41.209798    2894 main.go:141] libmachine: Using API Version  1
	I0816 05:33:41.209806    2894 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:33:41.210037    2894 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:33:41.210168    2894 main.go:141] libmachine: (functional-525000) Calling .DriverName
	I0816 05:33:41.238747    2894 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0816 05:33:41.275920    2894 start.go:297] selected driver: hyperkit
	I0816 05:33:41.275950    2894 start.go:901] validating driver "hyperkit" against &{Name:functional-525000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:functional-525000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 05:33:41.276151    2894 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 05:33:41.300730    2894 out.go:201] 
	W0816 05:33:41.324195    2894 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 05:33:41.345737    2894 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.48s)

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)

TestFunctional/parallel/ServiceCmdConnect (12.38s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-525000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-525000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xv5wn" [dfcb9aed-f5d8-461d-814c-a88dbe92770c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xv5wn" [dfcb9aed-f5d8-461d-814c-a88dbe92770c] Running
E0816 05:33:27.983295    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004623628s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31257
functional_test.go:1675: http://192.169.0.4:31257: success! body:

Hostname: hello-node-connect-67bdd5bbb4-xv5wn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31257
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.38s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (25.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [98141464-5b22-4ae0-b3ab-58a242cf0b62] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005891279s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-525000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-525000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [51ab00d8-057d-4894-8e3c-184ce793b5c4] Pending
helpers_test.go:344: "sp-pod" [51ab00d8-057d-4894-8e3c-184ce793b5c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [51ab00d8-057d-4894-8e3c-184ce793b5c4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004457146s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-525000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-525000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8e80156a-a798-4610-9a0a-c0da5dbc8dbe] Pending
helpers_test.go:344: "sp-pod" [8e80156a-a798-4610-9a0a-c0da5dbc8dbe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8e80156a-a798-4610-9a0a-c0da5dbc8dbe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003142005s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-525000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.50s)

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (0.97s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp functional-525000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd1395469227/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -n functional-525000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.97s)

TestFunctional/parallel/MySQL (25.52s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-525000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vncff" [ad39a63a-4ca3-4a14-b3c0-844632738fce] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vncff" [ad39a63a-4ca3-4a14-b3c0-844632738fce] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00426485s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;": exit status 1 (147.759115ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;": exit status 1 (126.908863ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;": exit status 1 (102.914022ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-525000 exec mysql-6cdb49bbb-vncff -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.52s)

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1554/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/test/nested/copy/1554/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1554.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/1554.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1554.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /usr/share/ca-certificates/1554.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/15542.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /usr/share/ca-certificates/15542.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.06s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-525000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "sudo systemctl is-active crio": exit status 1 (162.849974ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-525000
docker.io/kicbase/echo-server:functional-525000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format short --alsologtostderr:
I0816 05:33:51.457303    3060 out.go:345] Setting OutFile to fd 1 ...
I0816 05:33:51.457613    3060 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.457619    3060 out.go:358] Setting ErrFile to fd 2...
I0816 05:33:51.457623    3060 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.457791    3060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
I0816 05:33:51.458364    3060 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.458457    3060 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.458866    3060 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.458915    3060 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.467696    3060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50947
I0816 05:33:51.468146    3060 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.468598    3060 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.468613    3060 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.468839    3060 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.469002    3060 main.go:141] libmachine: (functional-525000) Calling .GetState
I0816 05:33:51.469123    3060 main.go:141] libmachine: (functional-525000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 05:33:51.469203    3060 main.go:141] libmachine: (functional-525000) DBG | hyperkit pid from json: 2061
I0816 05:33:51.470592    3060 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.470621    3060 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.479018    3060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50949
I0816 05:33:51.479406    3060 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.479739    3060 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.479750    3060 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.479995    3060 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.480137    3060 main.go:141] libmachine: (functional-525000) Calling .DriverName
I0816 05:33:51.480305    3060 ssh_runner.go:195] Run: systemctl --version
I0816 05:33:51.480328    3060 main.go:141] libmachine: (functional-525000) Calling .GetSSHHostname
I0816 05:33:51.480429    3060 main.go:141] libmachine: (functional-525000) Calling .GetSSHPort
I0816 05:33:51.480514    3060 main.go:141] libmachine: (functional-525000) Calling .GetSSHKeyPath
I0816 05:33:51.480598    3060 main.go:141] libmachine: (functional-525000) Calling .GetSSHUsername
I0816 05:33:51.480680    3060 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/functional-525000/id_rsa Username:docker}
I0816 05:33:51.513168    3060 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 05:33:51.543191    3060 main.go:141] libmachine: Making call to close driver server
I0816 05:33:51.543200    3060 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:51.543339    3060 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:51.543346    3060 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:51.543356    3060 main.go:141] libmachine: Making call to close driver server
I0816 05:33:51.543362    3060 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:51.543505    3060 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:51.543513    3060 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:51.543513    3060 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr
2024/08/16 05:33:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-525000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-525000 | f7120ac0a094d | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-525000 | 913e80ae3ac4c | 1.24MB |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format table --alsologtostderr:
I0816 05:33:54.555617    3086 out.go:345] Setting OutFile to fd 1 ...
I0816 05:33:54.555816    3086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:54.555822    3086 out.go:358] Setting ErrFile to fd 2...
I0816 05:33:54.555826    3086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:54.555994    3086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
I0816 05:33:54.556587    3086 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:54.556689    3086 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:54.557943    3086 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:54.558005    3086 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:54.566491    3086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50980
I0816 05:33:54.566936    3086 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:54.567357    3086 main.go:141] libmachine: Using API Version  1
I0816 05:33:54.567366    3086 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:54.567594    3086 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:54.567717    3086 main.go:141] libmachine: (functional-525000) Calling .GetState
I0816 05:33:54.567795    3086 main.go:141] libmachine: (functional-525000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 05:33:54.567884    3086 main.go:141] libmachine: (functional-525000) DBG | hyperkit pid from json: 2061
I0816 05:33:54.569178    3086 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:54.569199    3086 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:54.577847    3086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50982
I0816 05:33:54.578186    3086 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:54.578572    3086 main.go:141] libmachine: Using API Version  1
I0816 05:33:54.578590    3086 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:54.578845    3086 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:54.578982    3086 main.go:141] libmachine: (functional-525000) Calling .DriverName
I0816 05:33:54.579148    3086 ssh_runner.go:195] Run: systemctl --version
I0816 05:33:54.579169    3086 main.go:141] libmachine: (functional-525000) Calling .GetSSHHostname
I0816 05:33:54.579251    3086 main.go:141] libmachine: (functional-525000) Calling .GetSSHPort
I0816 05:33:54.579337    3086 main.go:141] libmachine: (functional-525000) Calling .GetSSHKeyPath
I0816 05:33:54.579417    3086 main.go:141] libmachine: (functional-525000) Calling .GetSSHUsername
I0816 05:33:54.579508    3086 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/functional-525000/id_rsa Username:docker}
I0816 05:33:54.613771    3086 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 05:33:54.633608    3086 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.633619    3086 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.633759    3086 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:54.633765    3086 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.633771    3086 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:54.633775    3086 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.633780    3086 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.633921    3086 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.633923    3086 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:54.633930    3086 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr:
[{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-525000"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"913e80ae3ac4ce623f7f851ed11c5e6719baac6dd5e90d6af66317201276c84f","repoDigests":[],"repoTags":["localhost/my-image:functional-525000"],"size":"1240000"},{"id":"f7120ac0a094df3adca112030ff85017c0bf7ec26dbd8f4bcd4406b208067fc7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-525000"],"size":"30"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format json --alsologtostderr:
I0816 05:33:54.395182    3082 out.go:345] Setting OutFile to fd 1 ...
I0816 05:33:54.395461    3082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:54.395467    3082 out.go:358] Setting ErrFile to fd 2...
I0816 05:33:54.395471    3082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:54.395637    3082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
I0816 05:33:54.396243    3082 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:54.396336    3082 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:54.396671    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:54.396721    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:54.405096    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50974
I0816 05:33:54.405519    3082 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:54.405957    3082 main.go:141] libmachine: Using API Version  1
I0816 05:33:54.405989    3082 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:54.406231    3082 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:54.406369    3082 main.go:141] libmachine: (functional-525000) Calling .GetState
I0816 05:33:54.406465    3082 main.go:141] libmachine: (functional-525000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 05:33:54.406536    3082 main.go:141] libmachine: (functional-525000) DBG | hyperkit pid from json: 2061
I0816 05:33:54.407804    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:54.407827    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:54.416326    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50976
I0816 05:33:54.416701    3082 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:54.417074    3082 main.go:141] libmachine: Using API Version  1
I0816 05:33:54.417088    3082 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:54.417325    3082 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:54.417422    3082 main.go:141] libmachine: (functional-525000) Calling .DriverName
I0816 05:33:54.417614    3082 ssh_runner.go:195] Run: systemctl --version
I0816 05:33:54.417633    3082 main.go:141] libmachine: (functional-525000) Calling .GetSSHHostname
I0816 05:33:54.417707    3082 main.go:141] libmachine: (functional-525000) Calling .GetSSHPort
I0816 05:33:54.417835    3082 main.go:141] libmachine: (functional-525000) Calling .GetSSHKeyPath
I0816 05:33:54.417922    3082 main.go:141] libmachine: (functional-525000) Calling .GetSSHUsername
I0816 05:33:54.418007    3082 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/functional-525000/id_rsa Username:docker}
I0816 05:33:54.450487    3082 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 05:33:54.476162    3082 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.476170    3082 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.476348    3082 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:54.476356    3082 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.476366    3082 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:54.476373    3082 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.476378    3082 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.476569    3082 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.476566    3082 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:54.476581    3082 main.go:141] libmachine: Making call to close connection to plugin binary
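The `image ls --format json` stdout above is one JSON array of `{id, repoDigests, repoTags, size}` records, with `size` encoded as a decimal string of bytes. A minimal sketch of consuming it, using two entries copied verbatim from the output above:

```python
import json

# Two records copied from the `image ls --format json` output above.
raw = '''[
  {"id": "5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c",
   "repoDigests": [], "repoTags": ["docker.io/library/nginx:latest"], "size": "188000000"},
  {"id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"}
]'''

images = json.loads(raw)
# "size" is a decimal string, so convert before comparing numerically.
largest = max(images, key=lambda img: int(img["size"]))
print(largest["repoTags"][0])  # docker.io/library/nginx:latest
```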
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: f7120ac0a094df3adca112030ff85017c0bf7ec26dbd8f4bcd4406b208067fc7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-525000
size: "30"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-525000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image ls --format yaml --alsologtostderr:
I0816 05:33:51.634360    3065 out.go:345] Setting OutFile to fd 1 ...
I0816 05:33:51.634648    3065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.634654    3065 out.go:358] Setting ErrFile to fd 2...
I0816 05:33:51.634657    3065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.634853    3065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
I0816 05:33:51.635459    3065 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.635554    3065 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.635911    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.635960    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.644398    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50952
I0816 05:33:51.644850    3065 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.645249    3065 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.645257    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.645483    3065 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.645606    3065 main.go:141] libmachine: (functional-525000) Calling .GetState
I0816 05:33:51.645730    3065 main.go:141] libmachine: (functional-525000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 05:33:51.645800    3065 main.go:141] libmachine: (functional-525000) DBG | hyperkit pid from json: 2061
I0816 05:33:51.647116    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.647138    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.655658    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50954
I0816 05:33:51.656022    3065 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.656365    3065 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.656384    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.656586    3065 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.656688    3065 main.go:141] libmachine: (functional-525000) Calling .DriverName
I0816 05:33:51.656858    3065 ssh_runner.go:195] Run: systemctl --version
I0816 05:33:51.656877    3065 main.go:141] libmachine: (functional-525000) Calling .GetSSHHostname
I0816 05:33:51.656963    3065 main.go:141] libmachine: (functional-525000) Calling .GetSSHPort
I0816 05:33:51.657049    3065 main.go:141] libmachine: (functional-525000) Calling .GetSSHKeyPath
I0816 05:33:51.657135    3065 main.go:141] libmachine: (functional-525000) Calling .GetSSHUsername
I0816 05:33:51.657219    3065 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/functional-525000/id_rsa Username:docker}
I0816 05:33:51.689337    3065 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0816 05:33:51.715132    3065 main.go:141] libmachine: Making call to close driver server
I0816 05:33:51.715145    3065 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:51.715323    3065 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:51.715330    3065 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:51.715338    3065 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:51.715347    3065 main.go:141] libmachine: Making call to close driver server
I0816 05:33:51.715351    3065 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:51.715481    3065 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:51.715491    3065 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:51.715518    3065 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
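The YAML listing above carries the same per-image records as the JSON form, just in a different layout. A stdlib-only sketch (no PyYAML; the `render` helper is illustrative, not minikube's code) that reproduces one entry in that layout:

```python
# Render one image record in the same layout as the
# `image ls --format yaml` output above.
def render(img):
    lines = [f'- id: {img["id"]}',
             "repoDigests: []",
             "repoTags:"]
    lines += [f"- {tag}" for tag in img["repoTags"]]
    lines.append(f'size: "{img["size"]}"')
    return "\n".join(lines)

# Data taken from the pause:3.10 entry in the listing above.
entry = {"id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
         "repoTags": ["registry.k8s.io/pause:3.10"],
         "size": "736000"}
print(render(entry))
```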
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh pgrep buildkitd: exit status 1 (127.870994ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr: (2.285993429s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-525000 image build -t localhost/my-image:functional-525000 testdata/build --alsologtostderr:
I0816 05:33:51.921991    3074 out.go:345] Setting OutFile to fd 1 ...
I0816 05:33:51.922367    3074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.922373    3074 out.go:358] Setting ErrFile to fd 2...
I0816 05:33:51.922377    3074 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 05:33:51.922558    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
I0816 05:33:51.923186    3074 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.924413    3074 config.go:182] Loaded profile config "functional-525000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 05:33:51.924762    3074 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.924804    3074 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.933276    3074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50964
I0816 05:33:51.933680    3074 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.934090    3074 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.934102    3074 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.934385    3074 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.934510    3074 main.go:141] libmachine: (functional-525000) Calling .GetState
I0816 05:33:51.934593    3074 main.go:141] libmachine: (functional-525000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0816 05:33:51.934659    3074 main.go:141] libmachine: (functional-525000) DBG | hyperkit pid from json: 2061
I0816 05:33:51.935975    3074 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0816 05:33:51.936003    3074 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0816 05:33:51.944449    3074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50966
I0816 05:33:51.944795    3074 main.go:141] libmachine: () Calling .GetVersion
I0816 05:33:51.945170    3074 main.go:141] libmachine: Using API Version  1
I0816 05:33:51.945191    3074 main.go:141] libmachine: () Calling .SetConfigRaw
I0816 05:33:51.945393    3074 main.go:141] libmachine: () Calling .GetMachineName
I0816 05:33:51.945497    3074 main.go:141] libmachine: (functional-525000) Calling .DriverName
I0816 05:33:51.945661    3074 ssh_runner.go:195] Run: systemctl --version
I0816 05:33:51.945680    3074 main.go:141] libmachine: (functional-525000) Calling .GetSSHHostname
I0816 05:33:51.945762    3074 main.go:141] libmachine: (functional-525000) Calling .GetSSHPort
I0816 05:33:51.945840    3074 main.go:141] libmachine: (functional-525000) Calling .GetSSHKeyPath
I0816 05:33:51.945928    3074 main.go:141] libmachine: (functional-525000) Calling .GetSSHUsername
I0816 05:33:51.946006    3074 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/functional-525000/id_rsa Username:docker}
I0816 05:33:51.977616    3074 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1093976713.tar
I0816 05:33:51.977697    3074 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 05:33:51.989349    3074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1093976713.tar
I0816 05:33:51.992638    3074 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1093976713.tar: stat -c "%s %y" /var/lib/minikube/build/build.1093976713.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1093976713.tar': No such file or directory
I0816 05:33:51.992677    3074 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1093976713.tar --> /var/lib/minikube/build/build.1093976713.tar (3072 bytes)
I0816 05:33:52.032461    3074 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1093976713
I0816 05:33:52.046239    3074 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1093976713 -xf /var/lib/minikube/build/build.1093976713.tar
I0816 05:33:52.061830    3074 docker.go:360] Building image: /var/lib/minikube/build/build.1093976713
I0816 05:33:52.061900    3074 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-525000 /var/lib/minikube/build/build.1093976713
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:913e80ae3ac4ce623f7f851ed11c5e6719baac6dd5e90d6af66317201276c84f done
#8 naming to localhost/my-image:functional-525000 done
#8 DONE 0.0s
I0816 05:33:54.107564    3074 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-525000 /var/lib/minikube/build/build.1093976713: (2.04569002s)
I0816 05:33:54.107631    3074 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1093976713
I0816 05:33:54.117318    3074 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1093976713.tar
I0816 05:33:54.126872    3074 build_images.go:217] Built localhost/my-image:functional-525000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1093976713.tar
I0816 05:33:54.126897    3074 build_images.go:133] succeeded building to: functional-525000
I0816 05:33:54.126902    3074 build_images.go:134] failed building to: 
I0816 05:33:54.126936    3074 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.126945    3074 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.127159    3074 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.127168    3074 main.go:141] libmachine: Making call to close connection to plugin binary
I0816 05:33:54.127173    3074 main.go:141] libmachine: Making call to close driver server
I0816 05:33:54.127177    3074 main.go:141] libmachine: (functional-525000) Calling .Close
I0816 05:33:54.127359    3074 main.go:141] libmachine: Successfully made call to close driver server
I0816 05:33:54.127364    3074 main.go:141] libmachine: (functional-525000) DBG | Closing plugin on server side
I0816 05:33:54.127367    3074 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
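Per the log above, `image build` first tars the local build context (`build.1093976713.tar`), scps it into the VM, untars it under `/var/lib/minikube/build`, and only then runs `docker build`. A rough local-only sketch of the tar-staging step (paths and file contents are illustrative, not minikube's actual helpers; the three files mirror the `FROM`/`RUN true`/`ADD content.txt` steps visible in the build output):

```python
import os
import tarfile
import tempfile

# Create a throwaway build context resembling testdata/build:
# a Dockerfile plus the content.txt it ADDs.
ctx = tempfile.mkdtemp()
with open(os.path.join(ctx, "Dockerfile"), "w") as f:
    f.write("FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n")
with open(os.path.join(ctx, "content.txt"), "w") as f:
    f.write("hello\n")

# Tar the context the way the log shows minikube staging it before scp.
tar_path = os.path.join(tempfile.mkdtemp(), "build.tar")
with tarfile.open(tar_path, "w") as tar:
    for name in os.listdir(ctx):
        tar.add(os.path.join(ctx, name), arcname=name)

with tarfile.open(tar_path) as tar:
    names = sorted(tar.getnames())
print(names)  # ['Dockerfile', 'content.txt']
```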
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.60s)

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.722752777s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-525000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.6s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-525000 docker-env) && out/minikube-darwin-amd64 status -p functional-525000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-525000 docker-env) && docker images"
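The `docker-env` subcommand being eval'd above prints shell `export` lines that point the host `docker` CLI at the daemon inside the VM, which is why the subsequent `docker images` call works. A sketch of parsing such output into a dict (variable names follow the usual minikube/docker-machine convention; the values here are illustrative except the VM IP 192.169.0.4, which appears in the log above):

```python
# Parse `export VAR="value"` lines such as those emitted by
# `minikube docker-env` into a plain dict.
sample = '''export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.169.0.4:2376"
export DOCKER_CERT_PATH="/Users/jenkins/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="functional-525000"'''

env = {}
for line in sample.splitlines():
    if line.startswith("export "):
        key, _, value = line[len("export "):].partition("=")
        env[key] = value.strip('"')

print(env["DOCKER_HOST"])  # tcp://192.169.0.4:2376
```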
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.60s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon kicbase/echo-server:functional-525000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon kicbase/echo-server:functional-525000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-525000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load --daemon kicbase/echo-server:functional-525000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image save kicbase/echo-server:functional-525000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image rm kicbase/echo-server:functional-525000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-525000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 image save --daemon kicbase/echo-server:functional-525000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-525000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "194.642144ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "78.608158ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "177.105445ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "77.374837ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2789: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-525000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [29ed73e2-2cfc-4f23-924c-7347d4f37caa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [29ed73e2-2cfc-4f23-924c-7347d4f37caa] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.003855214s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-525000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.224.0 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-525000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-525000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-525000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-glgck" [9f988db4-8bea-429b-ad1c-9e914605667c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-glgck" [9f988db4-8bea-429b-ad1c-9e914605667c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004200597s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

TestFunctional/parallel/ServiceCmd/List (0.78s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.78s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.77s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service list -o json
functional_test.go:1494: Took "770.917553ms" to run "out/minikube-darwin-amd64 -p functional-525000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.77s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31454
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31454
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/any-port (5.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1626516983/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723811621395018000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1626516983/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723811621395018000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1626516983/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723811621395018000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1626516983/001/test-1723811621395018000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.63038ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 12:33 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 12:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 12:33 test-1723811621395018000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh cat /mount-9p/test-1723811621395018000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-525000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c62b30a0-0a26-4e66-bbd5-1d71f55c16ca] Pending
helpers_test.go:344: "busybox-mount" [c62b30a0-0a26-4e66-bbd5-1d71f55c16ca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c62b30a0-0a26-4e66-bbd5-1d71f55c16ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c62b30a0-0a26-4e66-bbd5-1d71f55c16ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002674383s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-525000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1626516983/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.90s)

TestFunctional/parallel/MountCmd/specific-port (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2539802596/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.654234ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2539802596/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "sudo umount -f /mount-9p": exit status 1 (135.941896ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-525000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2539802596/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1: exit status 1 (170.307384ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-525000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-525000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-525000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup664353400/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-525000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-525000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-525000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (203.99s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-073000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-073000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m23.61620446s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.99s)

TestMultiControlPlane/serial/DeployApp (4.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-073000 -- rollout status deployment/busybox: (2.624732811s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-65kjl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-mq4rd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-tbh6p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-65kjl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-mq4rd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-tbh6p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-65kjl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-mq4rd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-tbh6p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.84s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-65kjl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-65kjl -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-mq4rd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-mq4rd -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-tbh6p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-073000 -- exec busybox-7dff88458-tbh6p -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
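The host-IP extraction at ha_test.go:207 can be reproduced outside the cluster. A minimal sketch of the same awk/cut pipeline, assuming busybox-style `nslookup` output where the resolved address sits on line 5 in `Address N: <ip> <name>` form (the sample text below is illustrative, not captured from this run):

```shell
#!/bin/sh
# Illustrative busybox-style `nslookup host.minikube.internal` output;
# the real test pipes live nslookup output through the same two stages.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1 host.minikube.internal'

# NR==5 keeps only the fifth line; cut -d' ' -f3 takes the third
# space-separated field, i.e. the bare IP address.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The test at ha_test.go:218 then pings that extracted address (`192.169.0.1`, the hyperkit host gateway in this run) from each pod.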

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (52.92s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-073000 -v=7 --alsologtostderr
E0816 05:37:52.928306    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:52.935553    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:52.946762    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:52.968528    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:53.010308    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:53.093351    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:53.256078    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:53.577350    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:54.219855    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:55.502374    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:37:58.064955    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:38:03.187820    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:38:13.430063    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-073000 -v=7 --alsologtostderr: (52.462276599s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.92s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-073000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.36s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (9.12s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp testdata/cp-test.txt ha-073000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000:/home/docker/cp-test.txt ha-073000-m02:/home/docker/cp-test_ha-073000_ha-073000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test_ha-073000_ha-073000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000:/home/docker/cp-test.txt ha-073000-m03:/home/docker/cp-test_ha-073000_ha-073000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test_ha-073000_ha-073000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000:/home/docker/cp-test.txt ha-073000-m04:/home/docker/cp-test_ha-073000_ha-073000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test_ha-073000_ha-073000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp testdata/cp-test.txt ha-073000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m02:/home/docker/cp-test.txt ha-073000:/home/docker/cp-test_ha-073000-m02_ha-073000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test_ha-073000-m02_ha-073000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m02:/home/docker/cp-test.txt ha-073000-m03:/home/docker/cp-test_ha-073000-m02_ha-073000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test_ha-073000-m02_ha-073000-m03.txt"
E0816 05:38:27.976620    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m02:/home/docker/cp-test.txt ha-073000-m04:/home/docker/cp-test_ha-073000-m02_ha-073000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test_ha-073000-m02_ha-073000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp testdata/cp-test.txt ha-073000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m03:/home/docker/cp-test.txt ha-073000:/home/docker/cp-test_ha-073000-m03_ha-073000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test_ha-073000-m03_ha-073000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m03:/home/docker/cp-test.txt ha-073000-m02:/home/docker/cp-test_ha-073000-m03_ha-073000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test_ha-073000-m03_ha-073000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m03:/home/docker/cp-test.txt ha-073000-m04:/home/docker/cp-test_ha-073000-m03_ha-073000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test_ha-073000-m03_ha-073000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp testdata/cp-test.txt ha-073000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3689633976/001/cp-test_ha-073000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt ha-073000:/home/docker/cp-test_ha-073000-m04_ha-073000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000 "sudo cat /home/docker/cp-test_ha-073000-m04_ha-073000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt ha-073000-m02:/home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m02 "sudo cat /home/docker/cp-test_ha-073000-m04_ha-073000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 cp ha-073000-m04:/home/docker/cp-test.txt ha-073000-m03:/home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 ssh -n ha-073000-m03 "sudo cat /home/docker/cp-test_ha-073000-m04_ha-073000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.12s)
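The CopyFile matrix above follows one fixed pattern for its node-to-node leg: for every ordered pair of distinct nodes, `minikube cp` transfers the file and `minikube ssh -n <node> "sudo cat …"` verifies it landed. A dry-run sketch of that pairing logic (profile and node names mirror this run; the commands are only echoed, nothing is executed against a cluster, and the test additionally copies testdata into each node and each node's file back to the local temp dir):

```shell
#!/bin/sh
profile=ha-073000
nodes="ha-073000 ha-073000-m02 ha-073000-m03 ha-073000-m04"

# Emit, for each ordered pair of distinct nodes, the cp command and
# the ssh verification the test runs immediately after it.
gen_pairs() {
  for src in $nodes; do
    for dst in $nodes; do
      [ "$src" = "$dst" ] && continue
      f="/home/docker/cp-test_${src}_${dst}.txt"
      echo "minikube -p $profile cp $src:/home/docker/cp-test.txt $dst:$f"
      echo "minikube -p $profile ssh -n $dst \"sudo cat $f\""
    done
  done
}
gen_pairs
```

With 4 nodes this yields 12 ordered pairs, i.e. 24 commands, matching the cp/ssh rhythm visible in the log above.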

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 node stop m02 -v=7 --alsologtostderr
E0816 05:38:33.911643    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 node stop m02 -v=7 --alsologtostderr: (8.341413325s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr: exit status 7 (365.564659ms)
-- stdout --
	ha-073000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-073000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-073000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-073000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0816 05:38:41.295332    3552 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:38:41.295620    3552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:38:41.295626    3552 out.go:358] Setting ErrFile to fd 2...
	I0816 05:38:41.295630    3552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:38:41.295814    3552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:38:41.295985    3552 out.go:352] Setting JSON to false
	I0816 05:38:41.296011    3552 mustload.go:65] Loading cluster: ha-073000
	I0816 05:38:41.296046    3552 notify.go:220] Checking for updates...
	I0816 05:38:41.296335    3552 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:38:41.296350    3552 status.go:255] checking status of ha-073000 ...
	I0816 05:38:41.296738    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.296804    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.305985    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51712
	I0816 05:38:41.306390    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.306798    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.306838    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.307059    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.307171    3552 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:38:41.307246    3552 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:38:41.307335    3552 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3112
	I0816 05:38:41.308336    3552 status.go:330] ha-073000 host status = "Running" (err=<nil>)
	I0816 05:38:41.308355    3552 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:38:41.308594    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.308617    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.316999    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51714
	I0816 05:38:41.317376    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.317733    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.317754    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.317980    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.318086    3552 main.go:141] libmachine: (ha-073000) Calling .GetIP
	I0816 05:38:41.318164    3552 host.go:66] Checking if "ha-073000" exists ...
	I0816 05:38:41.318414    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.318438    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.331311    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51716
	I0816 05:38:41.331682    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.331995    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.332009    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.332228    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.332334    3552 main.go:141] libmachine: (ha-073000) Calling .DriverName
	I0816 05:38:41.332480    3552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:38:41.332500    3552 main.go:141] libmachine: (ha-073000) Calling .GetSSHHostname
	I0816 05:38:41.332578    3552 main.go:141] libmachine: (ha-073000) Calling .GetSSHPort
	I0816 05:38:41.332670    3552 main.go:141] libmachine: (ha-073000) Calling .GetSSHKeyPath
	I0816 05:38:41.332752    3552 main.go:141] libmachine: (ha-073000) Calling .GetSSHUsername
	I0816 05:38:41.332826    3552 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000/id_rsa Username:docker}
	I0816 05:38:41.367892    3552 ssh_runner.go:195] Run: systemctl --version
	I0816 05:38:41.372056    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:38:41.383917    3552 kubeconfig.go:125] found "ha-073000" server: "https://192.169.0.254:8443"
	I0816 05:38:41.383942    3552 api_server.go:166] Checking apiserver status ...
	I0816 05:38:41.383981    3552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:38:41.399323    3552 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1981/cgroup
	W0816 05:38:41.407197    3552 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1981/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:38:41.407241    3552 ssh_runner.go:195] Run: ls
	I0816 05:38:41.410544    3552 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 05:38:41.414590    3552 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 05:38:41.414602    3552 status.go:422] ha-073000 apiserver status = Running (err=<nil>)
	I0816 05:38:41.414611    3552 status.go:257] ha-073000 status: &{Name:ha-073000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:38:41.414622    3552 status.go:255] checking status of ha-073000-m02 ...
	I0816 05:38:41.414892    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.414915    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.423545    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51720
	I0816 05:38:41.423916    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.424308    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.424328    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.424532    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.424635    3552 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:38:41.424714    3552 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:38:41.424781    3552 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3129
	I0816 05:38:41.425739    3552 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3129 missing from process table
	I0816 05:38:41.425775    3552 status.go:330] ha-073000-m02 host status = "Stopped" (err=<nil>)
	I0816 05:38:41.425782    3552 status.go:343] host is not running, skipping remaining checks
	I0816 05:38:41.425789    3552 status.go:257] ha-073000-m02 status: &{Name:ha-073000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:38:41.425799    3552 status.go:255] checking status of ha-073000-m03 ...
	I0816 05:38:41.426065    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.426087    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.434536    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51722
	I0816 05:38:41.434890    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.435206    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.435216    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.435426    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.435529    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetState
	I0816 05:38:41.435615    3552 main.go:141] libmachine: (ha-073000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:38:41.435692    3552 main.go:141] libmachine: (ha-073000-m03) DBG | hyperkit pid from json: 3138
	I0816 05:38:41.436667    3552 status.go:330] ha-073000-m03 host status = "Running" (err=<nil>)
	I0816 05:38:41.436677    3552 host.go:66] Checking if "ha-073000-m03" exists ...
	I0816 05:38:41.436918    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.436946    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.445589    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51724
	I0816 05:38:41.445950    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.446301    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.446319    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.446557    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.446664    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetIP
	I0816 05:38:41.446750    3552 host.go:66] Checking if "ha-073000-m03" exists ...
	I0816 05:38:41.447010    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.447034    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.455517    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51726
	I0816 05:38:41.455886    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.456246    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.456265    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.456467    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.456581    3552 main.go:141] libmachine: (ha-073000-m03) Calling .DriverName
	I0816 05:38:41.456714    3552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:38:41.456725    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetSSHHostname
	I0816 05:38:41.456795    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetSSHPort
	I0816 05:38:41.456880    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetSSHKeyPath
	I0816 05:38:41.456960    3552 main.go:141] libmachine: (ha-073000-m03) Calling .GetSSHUsername
	I0816 05:38:41.457030    3552 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m03/id_rsa Username:docker}
	I0816 05:38:41.490946    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:38:41.502822    3552 kubeconfig.go:125] found "ha-073000" server: "https://192.169.0.254:8443"
	I0816 05:38:41.502837    3552 api_server.go:166] Checking apiserver status ...
	I0816 05:38:41.502881    3552 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:38:41.514525    3552 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup
	W0816 05:38:41.522642    3552 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2037/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:38:41.522700    3552 ssh_runner.go:195] Run: ls
	I0816 05:38:41.525942    3552 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0816 05:38:41.529109    3552 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0816 05:38:41.529121    3552 status.go:422] ha-073000-m03 apiserver status = Running (err=<nil>)
	I0816 05:38:41.529129    3552 status.go:257] ha-073000-m03 status: &{Name:ha-073000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:38:41.529138    3552 status.go:255] checking status of ha-073000-m04 ...
	I0816 05:38:41.529384    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.529404    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.537848    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51730
	I0816 05:38:41.538229    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.538592    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.538607    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.538850    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.538946    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:38:41.539023    3552 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:38:41.539110    3552 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3229
	I0816 05:38:41.540098    3552 status.go:330] ha-073000-m04 host status = "Running" (err=<nil>)
	I0816 05:38:41.540106    3552 host.go:66] Checking if "ha-073000-m04" exists ...
	I0816 05:38:41.540353    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.540378    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.548679    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51732
	I0816 05:38:41.549018    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.549369    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.549387    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.549614    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.549724    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetIP
	I0816 05:38:41.549816    3552 host.go:66] Checking if "ha-073000-m04" exists ...
	I0816 05:38:41.550063    3552 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:38:41.550087    3552 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:38:41.558436    3552 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51734
	I0816 05:38:41.558793    3552 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:38:41.559122    3552 main.go:141] libmachine: Using API Version  1
	I0816 05:38:41.559131    3552 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:38:41.559312    3552 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:38:41.559416    3552 main.go:141] libmachine: (ha-073000-m04) Calling .DriverName
	I0816 05:38:41.559555    3552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:38:41.559567    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHHostname
	I0816 05:38:41.559648    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHPort
	I0816 05:38:41.559724    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHKeyPath
	I0816 05:38:41.559796    3552 main.go:141] libmachine: (ha-073000-m04) Calling .GetSSHUsername
	I0816 05:38:41.559895    3552 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/ha-073000-m04/id_rsa Username:docker}
	I0816 05:38:41.593083    3552 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:38:41.603554    3552 status.go:257] ha-073000-m04 status: &{Name:ha-073000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.71s)
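The non-zero exit above is expected: `minikube status` still prints every node but signals degraded state through its exit code (status 7 in this run, with m02 stopped). A small sketch that tallies stopped hosts from that kind of stdout, assuming the `host: <state>` line format shown above (the status text here is an abridged, illustrative version of the real output):

```shell
#!/bin/sh
# Abridged per-node status in the shape `minikube status` prints.
status='ha-073000
host: Running
ha-073000-m02
host: Stopped
ha-073000-m03
host: Running
ha-073000-m04
host: Running'

# Count nodes whose host line reports Stopped; n+0 forces 0 when none match.
stopped=$(printf '%s\n' "$status" | awk '/^host: Stopped/ {n++} END {print n+0}')
echo "$stopped stopped node(s)"
```

The test itself asserts on the exit status rather than parsing the text, so the status command can keep its human-readable output stable.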

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.48s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 node start m02 -v=7 --alsologtostderr
E0816 05:39:14.874324    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 node start m02 -v=7 --alsologtostderr: (36.985519503s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-073000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-073000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-073000 -v=7 --alsologtostderr: (27.032096318s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-073000 --wait=true -v=7 --alsologtostderr
E0816 05:40:36.794175    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-073000 --wait=true -v=7 --alsologtostderr: (2m45.065344557s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-073000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.21s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 node delete m03 -v=7 --alsologtostderr: (6.933888585s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.38s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

TestMultiControlPlane/serial/StopCluster (24.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 stop -v=7 --alsologtostderr
E0816 05:42:52.938627    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-073000 stop -v=7 --alsologtostderr: (24.873206917s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-073000 status -v=7 --alsologtostderr: exit status 7 (93.868861ms)
-- stdout --
	ha-073000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-073000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-073000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0816 05:43:04.472718    3695 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:43:04.472991    3695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.472996    3695 out.go:358] Setting ErrFile to fd 2...
	I0816 05:43:04.473000    3695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:43:04.473181    3695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:43:04.473361    3695 out.go:352] Setting JSON to false
	I0816 05:43:04.473383    3695 mustload.go:65] Loading cluster: ha-073000
	I0816 05:43:04.473420    3695 notify.go:220] Checking for updates...
	I0816 05:43:04.473676    3695 config.go:182] Loaded profile config "ha-073000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:43:04.473692    3695 status.go:255] checking status of ha-073000 ...
	I0816 05:43:04.474094    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.474156    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.484087    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52043
	I0816 05:43:04.484589    3695 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.485143    3695 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.485179    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.485502    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.485675    3695 main.go:141] libmachine: (ha-073000) Calling .GetState
	I0816 05:43:04.485807    3695 main.go:141] libmachine: (ha-073000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.485893    3695 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid from json: 3625
	I0816 05:43:04.486827    3695 main.go:141] libmachine: (ha-073000) DBG | hyperkit pid 3625 missing from process table
	I0816 05:43:04.486858    3695 status.go:330] ha-073000 host status = "Stopped" (err=<nil>)
	I0816 05:43:04.486868    3695 status.go:343] host is not running, skipping remaining checks
	I0816 05:43:04.486875    3695 status.go:257] ha-073000 status: &{Name:ha-073000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:43:04.486898    3695 status.go:255] checking status of ha-073000-m02 ...
	I0816 05:43:04.487133    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.487153    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.495343    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52045
	I0816 05:43:04.495656    3695 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.496005    3695 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.496021    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.496230    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.496365    3695 main.go:141] libmachine: (ha-073000-m02) Calling .GetState
	I0816 05:43:04.496475    3695 main.go:141] libmachine: (ha-073000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.496517    3695 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid from json: 3630
	I0816 05:43:04.497414    3695 main.go:141] libmachine: (ha-073000-m02) DBG | hyperkit pid 3630 missing from process table
	I0816 05:43:04.497454    3695 status.go:330] ha-073000-m02 host status = "Stopped" (err=<nil>)
	I0816 05:43:04.497464    3695 status.go:343] host is not running, skipping remaining checks
	I0816 05:43:04.497471    3695 status.go:257] ha-073000-m02 status: &{Name:ha-073000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:43:04.497495    3695 status.go:255] checking status of ha-073000-m04 ...
	I0816 05:43:04.497780    3695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:43:04.497813    3695 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:43:04.507510    3695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52047
	I0816 05:43:04.507843    3695 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:43:04.508154    3695 main.go:141] libmachine: Using API Version  1
	I0816 05:43:04.508163    3695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:43:04.508370    3695 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:43:04.508476    3695 main.go:141] libmachine: (ha-073000-m04) Calling .GetState
	I0816 05:43:04.508552    3695 main.go:141] libmachine: (ha-073000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:43:04.508619    3695 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid from json: 3643
	I0816 05:43:04.509516    3695 main.go:141] libmachine: (ha-073000-m04) DBG | hyperkit pid 3643 missing from process table
	I0816 05:43:04.509538    3695 status.go:330] ha-073000-m04 host status = "Stopped" (err=<nil>)
	I0816 05:43:04.509546    3695 status.go:343] host is not running, skipping remaining checks
	I0816 05:43:04.509552    3695 status.go:257] ha-073000-m04 status: &{Name:ha-073000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.97s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.26s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.34s)

TestImageBuild/serial/Setup (37.96s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-979000 --driver=hyperkit 
E0816 05:47:52.934064    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-979000 --driver=hyperkit : (37.964853744s)
--- PASS: TestImageBuild/serial/Setup (37.96s)

TestImageBuild/serial/NormalBuild (1.88s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-979000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-979000: (1.878176416s)
--- PASS: TestImageBuild/serial/NormalBuild (1.88s)

TestImageBuild/serial/BuildWithBuildArg (0.77s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-979000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.77s)

TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-979000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-979000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

TestJSONOutput/start/Command (77.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-868000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-868000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m17.480790151s)
--- PASS: TestJSONOutput/start/Command (77.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-868000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-868000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-868000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-868000 --output=json --user=testUser: (8.332406818s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-355000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-355000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.30051ms)
-- stdout --
	{"specversion":"1.0","id":"e5e5316f-eea7-424c-86ea-1cb2f2348e36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-355000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9dd9be4-b3ab-496f-b8c1-0f7900876550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"533f67d2-4ccc-4716-bdf9-b2d343319aa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig"}}
	{"specversion":"1.0","id":"38b2091d-7810-4bda-9487-fd1c86aa4302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ffa4d517-6165-4882-98a2-c3264719d002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9a17a01-5a3a-4795-8035-52a89b382e60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube"}}
	{"specversion":"1.0","id":"1ab67466-de1b-4248-9506-c3d3a506b5e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"426d28ed-9dd6-456e-ace3-369db0efdcd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-355000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.1s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.10s)

TestMinikubeProfile (90.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-216000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-216000 --driver=hyperkit : (40.598267633s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-218000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-218000 --driver=hyperkit : (40.699570293s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-216000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-218000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-218000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-218000: (3.410455121s)
helpers_test.go:175: Cleaning up "first-216000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-216000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-216000: (5.245774261s)
--- PASS: TestMinikubeProfile (90.70s)

TestMultiNode/serial/FreshStart2Nodes (105.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-120000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0816 05:54:16.002365    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-120000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m45.590831917s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.83s)

TestMultiNode/serial/DeployApp2Nodes (4.52s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-120000 -- rollout status deployment/busybox: (2.80796659s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fqvsk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fx8zx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fqvsk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fx8zx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fqvsk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fx8zx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.52s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fqvsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fqvsk -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fx8zx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-120000 -- exec busybox-7dff88458-fx8zx -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

TestMultiNode/serial/AddNode (45.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-120000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-120000 -v 3 --alsologtostderr: (45.316857759s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.64s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-120000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.19s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

TestMultiNode/serial/CopyFile (5.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp testdata/cp-test.txt multinode-120000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4207728907/001/cp-test_multinode-120000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000:/home/docker/cp-test.txt multinode-120000-m02:/home/docker/cp-test_multinode-120000_multinode-120000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test_multinode-120000_multinode-120000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000:/home/docker/cp-test.txt multinode-120000-m03:/home/docker/cp-test_multinode-120000_multinode-120000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test_multinode-120000_multinode-120000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp testdata/cp-test.txt multinode-120000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4207728907/001/cp-test_multinode-120000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m02:/home/docker/cp-test.txt multinode-120000:/home/docker/cp-test_multinode-120000-m02_multinode-120000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test_multinode-120000-m02_multinode-120000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m02:/home/docker/cp-test.txt multinode-120000-m03:/home/docker/cp-test_multinode-120000-m02_multinode-120000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test_multinode-120000-m02_multinode-120000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp testdata/cp-test.txt multinode-120000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4207728907/001/cp-test_multinode-120000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt multinode-120000:/home/docker/cp-test_multinode-120000-m03_multinode-120000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000 "sudo cat /home/docker/cp-test_multinode-120000-m03_multinode-120000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 cp multinode-120000-m03:/home/docker/cp-test.txt multinode-120000-m02:/home/docker/cp-test_multinode-120000-m03_multinode-120000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 ssh -n multinode-120000-m02 "sudo cat /home/docker/cp-test_multinode-120000-m03_multinode-120000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.27s)

TestMultiNode/serial/StopNode (2.84s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-120000 node stop m03: (2.32892856s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-120000 status: exit status 7 (254.839189ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-120000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-120000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr: exit status 7 (257.595867ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-120000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-120000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 05:56:38.546802    4381 out.go:345] Setting OutFile to fd 1 ...
	I0816 05:56:38.547083    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:56:38.547089    4381 out.go:358] Setting ErrFile to fd 2...
	I0816 05:56:38.547092    4381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 05:56:38.547268    4381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 05:56:38.547450    4381 out.go:352] Setting JSON to false
	I0816 05:56:38.547472    4381 mustload.go:65] Loading cluster: multinode-120000
	I0816 05:56:38.547517    4381 notify.go:220] Checking for updates...
	I0816 05:56:38.547785    4381 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 05:56:38.547801    4381 status.go:255] checking status of multinode-120000 ...
	I0816 05:56:38.548152    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.548210    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.556993    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53029
	I0816 05:56:38.557345    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.557756    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.557766    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.558027    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.558155    4381 main.go:141] libmachine: (multinode-120000) Calling .GetState
	I0816 05:56:38.558250    4381 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:56:38.558318    4381 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4092
	I0816 05:56:38.559500    4381 status.go:330] multinode-120000 host status = "Running" (err=<nil>)
	I0816 05:56:38.559521    4381 host.go:66] Checking if "multinode-120000" exists ...
	I0816 05:56:38.559764    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.559784    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.568157    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53031
	I0816 05:56:38.568480    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.568792    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.568819    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.569029    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.569147    4381 main.go:141] libmachine: (multinode-120000) Calling .GetIP
	I0816 05:56:38.569230    4381 host.go:66] Checking if "multinode-120000" exists ...
	I0816 05:56:38.569492    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.569519    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.581491    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53033
	I0816 05:56:38.581861    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.582184    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.582196    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.582401    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.582557    4381 main.go:141] libmachine: (multinode-120000) Calling .DriverName
	I0816 05:56:38.582724    4381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:56:38.582746    4381 main.go:141] libmachine: (multinode-120000) Calling .GetSSHHostname
	I0816 05:56:38.582822    4381 main.go:141] libmachine: (multinode-120000) Calling .GetSSHPort
	I0816 05:56:38.582902    4381 main.go:141] libmachine: (multinode-120000) Calling .GetSSHKeyPath
	I0816 05:56:38.582988    4381 main.go:141] libmachine: (multinode-120000) Calling .GetSSHUsername
	I0816 05:56:38.583065    4381 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000/id_rsa Username:docker}
	I0816 05:56:38.619644    4381 ssh_runner.go:195] Run: systemctl --version
	I0816 05:56:38.624139    4381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:56:38.639512    4381 kubeconfig.go:125] found "multinode-120000" server: "https://192.169.0.14:8443"
	I0816 05:56:38.639537    4381 api_server.go:166] Checking apiserver status ...
	I0816 05:56:38.639575    4381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 05:56:38.650691    4381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup
	W0816 05:56:38.657964    4381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0816 05:56:38.658007    4381 ssh_runner.go:195] Run: ls
	I0816 05:56:38.661210    4381 api_server.go:253] Checking apiserver healthz at https://192.169.0.14:8443/healthz ...
	I0816 05:56:38.664213    4381 api_server.go:279] https://192.169.0.14:8443/healthz returned 200:
	ok
	I0816 05:56:38.664223    4381 status.go:422] multinode-120000 apiserver status = Running (err=<nil>)
	I0816 05:56:38.664232    4381 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:56:38.664244    4381 status.go:255] checking status of multinode-120000-m02 ...
	I0816 05:56:38.664493    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.664513    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.673167    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53037
	I0816 05:56:38.673526    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.673883    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.673893    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.674125    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.674254    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetState
	I0816 05:56:38.674333    4381 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:56:38.674403    4381 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4110
	I0816 05:56:38.675557    4381 status.go:330] multinode-120000-m02 host status = "Running" (err=<nil>)
	I0816 05:56:38.675566    4381 host.go:66] Checking if "multinode-120000-m02" exists ...
	I0816 05:56:38.675810    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.675831    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.684242    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53039
	I0816 05:56:38.684578    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.684889    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.684904    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.685098    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.685201    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetIP
	I0816 05:56:38.685279    4381 host.go:66] Checking if "multinode-120000-m02" exists ...
	I0816 05:56:38.685530    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.685549    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.693988    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53041
	I0816 05:56:38.694327    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.694648    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.694657    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.694870    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.694986    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .DriverName
	I0816 05:56:38.695129    4381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 05:56:38.695140    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHHostname
	I0816 05:56:38.695224    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHPort
	I0816 05:56:38.695324    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHKeyPath
	I0816 05:56:38.695412    4381 main.go:141] libmachine: (multinode-120000-m02) Calling .GetSSHUsername
	I0816 05:56:38.695543    4381 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1009/.minikube/machines/multinode-120000-m02/id_rsa Username:docker}
	I0816 05:56:38.726759    4381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 05:56:38.736801    4381 status.go:257] multinode-120000-m02 status: &{Name:multinode-120000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 05:56:38.736817    4381 status.go:255] checking status of multinode-120000-m03 ...
	I0816 05:56:38.737095    4381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 05:56:38.737118    4381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 05:56:38.745740    4381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53044
	I0816 05:56:38.746142    4381 main.go:141] libmachine: () Calling .GetVersion
	I0816 05:56:38.746492    4381 main.go:141] libmachine: Using API Version  1
	I0816 05:56:38.746508    4381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 05:56:38.746753    4381 main.go:141] libmachine: () Calling .GetMachineName
	I0816 05:56:38.746869    4381 main.go:141] libmachine: (multinode-120000-m03) Calling .GetState
	I0816 05:56:38.746961    4381 main.go:141] libmachine: (multinode-120000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 05:56:38.747040    4381 main.go:141] libmachine: (multinode-120000-m03) DBG | hyperkit pid from json: 4177
	I0816 05:56:38.748173    4381 main.go:141] libmachine: (multinode-120000-m03) DBG | hyperkit pid 4177 missing from process table
	I0816 05:56:38.748205    4381 status.go:330] multinode-120000-m03 host status = "Stopped" (err=<nil>)
	I0816 05:56:38.748213    4381 status.go:343] host is not running, skipping remaining checks
	I0816 05:56:38.748220    4381 status.go:257] multinode-120000-m03 status: &{Name:multinode-120000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)

TestMultiNode/serial/StartAfterStop (41.69s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-120000 node start m03 -v=7 --alsologtostderr: (41.320027712s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.69s)

TestMultiNode/serial/RestartKeepsNodes (139.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-120000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-120000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-120000: (18.894383411s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr
E0816 05:57:52.922951    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 05:58:27.971135    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-120000 --wait=true -v=8 --alsologtostderr: (2m0.846029883s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-120000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (139.86s)

TestMultiNode/serial/DeleteNode (3.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-120000 node delete m03: (2.944270178s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.29s)

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-120000 stop: (16.636122176s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-120000 status: exit status 7 (83.782409ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-120000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-120000 status --alsologtostderr: exit status 7 (79.430248ms)

-- stdout --
	multinode-120000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-120000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 06:00:00.358454    4491 out.go:345] Setting OutFile to fd 1 ...
	I0816 06:00:00.358645    4491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.358650    4491 out.go:358] Setting ErrFile to fd 2...
	I0816 06:00:00.358654    4491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 06:00:00.358824    4491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1009/.minikube/bin
	I0816 06:00:00.359009    4491 out.go:352] Setting JSON to false
	I0816 06:00:00.359032    4491 mustload.go:65] Loading cluster: multinode-120000
	I0816 06:00:00.359075    4491 notify.go:220] Checking for updates...
	I0816 06:00:00.359344    4491 config.go:182] Loaded profile config "multinode-120000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 06:00:00.359359    4491 status.go:255] checking status of multinode-120000 ...
	I0816 06:00:00.359698    4491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.359750    4491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.368856    4491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53274
	I0816 06:00:00.369174    4491 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.369608    4491 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.369619    4491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.369833    4491 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.369955    4491 main.go:141] libmachine: (multinode-120000) Calling .GetState
	I0816 06:00:00.370052    4491 main.go:141] libmachine: (multinode-120000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.370123    4491 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid from json: 4436
	I0816 06:00:00.370999    4491 main.go:141] libmachine: (multinode-120000) DBG | hyperkit pid 4436 missing from process table
	I0816 06:00:00.371025    4491 status.go:330] multinode-120000 host status = "Stopped" (err=<nil>)
	I0816 06:00:00.371034    4491 status.go:343] host is not running, skipping remaining checks
	I0816 06:00:00.371040    4491 status.go:257] multinode-120000 status: &{Name:multinode-120000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 06:00:00.371063    4491 status.go:255] checking status of multinode-120000-m02 ...
	I0816 06:00:00.371317    4491 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0816 06:00:00.371338    4491 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0816 06:00:00.379708    4491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53276
	I0816 06:00:00.380040    4491 main.go:141] libmachine: () Calling .GetVersion
	I0816 06:00:00.380369    4491 main.go:141] libmachine: Using API Version  1
	I0816 06:00:00.380378    4491 main.go:141] libmachine: () Calling .SetConfigRaw
	I0816 06:00:00.380581    4491 main.go:141] libmachine: () Calling .GetMachineName
	I0816 06:00:00.380708    4491 main.go:141] libmachine: (multinode-120000-m02) Calling .GetState
	I0816 06:00:00.380798    4491 main.go:141] libmachine: (multinode-120000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0816 06:00:00.380874    4491 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid from json: 4443
	I0816 06:00:00.381735    4491 main.go:141] libmachine: (multinode-120000-m02) DBG | hyperkit pid 4443 missing from process table
	I0816 06:00:00.381791    4491 status.go:330] multinode-120000-m02 host status = "Stopped" (err=<nil>)
	I0816 06:00:00.381801    4491 status.go:343] host is not running, skipping remaining checks
	I0816 06:00:00.381808    4491 status.go:257] multinode-120000-m02 status: &{Name:multinode-120000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/ValidateNameConflict (44.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-120000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-120000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-120000-m02 --driver=hyperkit : exit status 14 (459.918692ms)

-- stdout --
	* [multinode-120000-m02] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-120000-m02' is duplicated with machine name 'multinode-120000-m02' in profile 'multinode-120000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-120000-m03 --driver=hyperkit 
E0816 06:02:52.918085    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-120000-m03 --driver=hyperkit : (40.440828742s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-120000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-120000: exit status 80 (282.822591ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-120000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-120000-m03 already exists in multinode-120000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-120000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-120000-m03: (3.431188457s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.67s)
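The MK_USAGE failure above (exit status 14) comes from a profile-name uniqueness rule: a new profile may not reuse a machine name already claimed by an existing profile. A minimal shell sketch of that check follows; the machine list is illustrative data, not read from a real MINIKUBE_HOME, and this is not minikube's actual implementation:

```shell
# Illustrative machine names owned by the existing profile 'multinode-120000'
# (hypothetical data for this sketch, not read from disk).
existing_machines="multinode-120000 multinode-120000-m02"
new_profile="multinode-120000-m02"

conflict=no
for m in $existing_machines; do
  # A requested profile name that matches any existing machine name is rejected.
  [ "$m" = "$new_profile" ] && conflict=yes
done
echo "conflict=$conflict"
```

With the conflicting name, the sketch reports `conflict=yes`, mirroring why `multinode-120000-m02` was rejected while `multinode-120000-m03` started successfully.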

                                                
                                    
TestPreload (172.78s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0816 06:03:27.969022    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m48.423684397s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-308000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-308000 image pull gcr.io/k8s-minikube/busybox: (1.250586951s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-308000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-308000: (8.39087337s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (49.320968652s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-308000 image list
helpers_test.go:175: Cleaning up "test-preload-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-308000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-308000: (5.242349256s)
--- PASS: TestPreload (172.78s)

                                                
                                    
TestSkaffold (110.11s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4216633981 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4216633981 version: (1.688960723s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-975000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-975000 --memory=2600 --driver=hyperkit : (35.643665954s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4216633981 run --minikube-profile skaffold-975000 --kube-context skaffold-975000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe4216633981 run --minikube-profile skaffold-975000 --kube-context skaffold-975000 --status-check=true --port-forward=false --interactive=false: (55.269477863s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7cb78d6757-8bzfc" [f5085237-4071-4cab-9930-f312de7754c9] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004922607s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-67449cd47f-44lhb" [2a4c993a-4e1f-4645-861a-78323375f3a1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011313174s
helpers_test.go:175: Cleaning up "skaffold-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-975000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-975000: (5.246279441s)
--- PASS: TestSkaffold (110.11s)

                                                
                                    
TestRunningBinaryUpgrade (91.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1530997620 start -p running-upgrade-347000 --memory=2200 --vm-driver=hyperkit 
E0816 06:23:27.938534    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1530997620 start -p running-upgrade-347000 --memory=2200 --vm-driver=hyperkit : (53.687573034s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-347000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-347000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (31.721662341s)
helpers_test.go:175: Cleaning up "running-upgrade-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-347000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-347000: (5.240348693s)
--- PASS: TestRunningBinaryUpgrade (91.65s)

                                                
                                    
TestKubernetesUpgrade (1369.47s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (50.791190024s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-874000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-874000: (2.371544793s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-874000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-874000 status --format={{.Host}}: exit status 7 (68.419638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0816 06:27:36.001420    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:27:52.918758    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:28:27.966791    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:30:04.482388    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:31:27.563219    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:32:52.912360    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:33:27.960844    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:34:51.051627    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:35:04.474869    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (11m6.110124703s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-874000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (599.649348ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-874000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-874000
	    minikube start -p kubernetes-upgrade-874000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8740002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-874000 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0816 06:37:52.903654    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:38:27.951551    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:40:04.466741    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:42:53.039533    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:43:28.087656    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:44:16.123415    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:45:04.606323    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-874000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m44.219843062s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-874000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-874000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-874000: (5.258119197s)
--- PASS: TestKubernetesUpgrade (1369.47s)
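The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) seen in this test fires when the requested Kubernetes version is older than the version the cluster is already running. A hedged sketch of such a version guard using `sort -V` follows; this is an illustration of the comparison only, not minikube's actual implementation:

```shell
# Versions taken from the log above: the cluster runs v1.31.0 and the
# test requests a downgrade to v1.20.0.
current="v1.31.0"
requested="v1.20.0"

# sort -V orders version strings numerically; if the requested version
# sorts first (and differs), it is a downgrade and must be refused.
lowest=$(printf '%s\n%s\n' "$current" "$requested" | sort -V | head -n1)
if [ "$lowest" = "$requested" ] && [ "$requested" != "$current" ]; then
  echo "K8S_DOWNGRADE_UNSUPPORTED: cannot go from $current to $requested"
fi
```

The recovery paths the CLI suggests (delete and recreate, start a second profile, or keep the newer version) all avoid mutating an existing cluster downward in place.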

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.15s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3427116189/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3427116189/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3427116189/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3427116189/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.15s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.76s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2120304099/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2120304099/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2120304099/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2120304099/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.294745772 start -p stopped-upgrade-811000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.294745772 start -p stopped-upgrade-811000 --memory=2200 --vm-driver=hyperkit : (39.651059542s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.294745772 -p stopped-upgrade-811000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.294745772 -p stopped-upgrade-811000 stop: (8.249424345s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-811000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0816 06:47:53.042693    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:48:07.694446    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:48:28.091543    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-811000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m11.300132872s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-811000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-811000: (2.528454142s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.53s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (468.403354ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-661000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1009/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1009/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
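The expected MK_USAGE rejection above is a mutual-exclusion check: `--no-kubernetes` and `--kubernetes-version` cannot be combined. A small shell sketch of that validation follows (the variable names are hypothetical and this is not minikube's actual code path):

```shell
# Flags as passed in the test run above.
no_kubernetes=true
kubernetes_version="1.20"

err=""
# Asking for a specific Kubernetes version while disabling Kubernetes
# entirely is contradictory, so the start is refused up front.
if [ "$no_kubernetes" = true ] && [ -n "$kubernetes_version" ]; then
  err="MK_USAGE"
fi
echo "validation: ${err:-ok}"
```

As the stderr notes, a version pinned via global config can be cleared with `minikube config unset kubernetes-version` before starting with `--no-kubernetes`.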

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (74.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-661000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-661000 --driver=hyperkit : (1m14.524750151s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-661000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.71s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E0816 06:50:04.609002    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m35.98102772s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --driver=hyperkit : (6.171139699s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-661000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-661000 status -o json: exit status 2 (154.231144ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-661000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-661000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-661000: (2.484754174s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.81s)

                                                
                                    
TestNoKubernetes/serial/Start (22.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-661000 --no-kubernetes --driver=hyperkit : (22.202958352s)
--- PASS: TestNoKubernetes/serial/Start (22.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-661000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-661000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (134.040867ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
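The probe above relies on `systemctl is-active` exiting non-zero for an inactive unit; systemd reports status 3 for "inactive", which ssh relays as "Process exited with status 3" in the stderr. A sketch of branching on that status follows, with the probe stubbed out since no real systemd unit is assumed here:

```shell
# Stub standing in for `systemctl is-active --quiet service kubelet`;
# systemd returns exit status 3 when the unit is inactive.
probe() { return 3; }

if probe; then
  state="running"
else
  rc=$?
  state="not running (exit $rc)"
fi
echo "kubelet: $state"
```

The test passes precisely because the probe fails: a non-zero exit confirms kubelet is not active on a `--no-kubernetes` node.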

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.48s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-661000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-661000: (2.384287225s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-661000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-661000 --driver=hyperkit : (19.317875917s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-661000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-661000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (123.559592ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (61.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m1.861835736s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.86s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hsxxf" [9eb81dd9-fc38-44b0-b976-14bc36724fe9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hsxxf" [9eb81dd9-fc38-44b0-b976-14bc36724fe9] Running
E0816 06:51:31.188754    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005731691s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m10.03023991s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2s4jl" [623a51f1-f426-4f0a-92e5-f48a218e5137] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.002775948s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x5npv" [ee8f119c-ecf5-4e42-b94d-39bf764aaf8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x5npv" [ee8f119c-ecf5-4e42-b94d-39bf764aaf8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004243075s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (51.610652651s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.61s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z4d5t" [64877663-c104-4137-94f8-86bd27fa1e30] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004753459s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fzlhx" [cd6a0491-0c9c-455e-b824-538f00796093] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fzlhx" [cd6a0491-0c9c-455e-b824-538f00796093] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004760975s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4m2q9" [1f9253ee-3e7e-4a6c-8185-9ccd1858f7fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4m2q9" [1f9253ee-3e7e-4a6c-8185-9ccd1858f7fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005002112s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (1m18.967702147s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9zc58" [afe4b30a-1da8-4dc8-855b-b8be417a384e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9zc58" [afe4b30a-1da8-4dc8-855b-b8be417a384e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004367307s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m3.594529168s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.59s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0816 06:56:25.029793    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.036330    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.048059    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.069717    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.112617    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.194658    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.357108    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:25.678882    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:26.321956    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:27.605409    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:30.166818    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:35.288124    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:56:45.530900    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:06.012501    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m7.249739008s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lwz5j" [cc8fc024-e0e0-4b36-bf3c-b7f1d6d1c984] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004337658s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rh6lk" [663efe5b-73ba-4a2e-81bc-3fe29fd25afd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rh6lk" [663efe5b-73ba-4a2e-81bc-3fe29fd25afd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00484232s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nsc6f" [9b79d454-8cc3-4509-8374-2f0a0a25e88d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nsc6f" [9b79d454-8cc3-4509-8374-2f0a0a25e88d] Running
E0816 06:57:22.052106    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.059309    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.071663    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.094821    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.137123    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.218790    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:22.381797    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005260338s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (25.99s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-199000 exec deployment/netcat -- nslookup kubernetes.default
E0816 06:57:22.703488    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:23.346489    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-199000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.107944811s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-199000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-199000 exec deployment/netcat -- nslookup kubernetes.default: (10.123012457s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (25.99s)
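The DNS probe above times out once (15.1s, exit status 1) and then passes when `net_test.go` reruns the same `nslookup`: the check retries until it succeeds or its budget is spent. A sketch of that retry-until-success shape in plain sh; `retry`, `flaky`, and the counter file are illustrative stand-ins, not the harness's actual code.

```shell
#!/bin/sh
# Retry-until-success: the shape behind the DNS probe passing on attempt two.
retry() {                               # usage: retry MAX cmd args...
  max=$1; shift
  n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && return 1     # budget spent, report failure
    n=$((n + 1))
  done
  return 0
}

# Stand-in for the flaky lookup: fails on the first call, succeeds after.
state=$(mktemp)
echo 0 > "$state"
flaky() {
  calls=$(cat "$state")
  echo $((calls + 1)) > "$state"
  [ "${calls:-0}" -ge 1 ]
}

if retry 3 flaky; then echo "resolved"; else echo "unreachable"; fi
rm -f "$state"
```

The real harness also sleeps between attempts, which is why the subtest's wall time (25.99s) is larger than the two lookups alone.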

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (50.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0816 06:57:42.556303    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:57:46.974932    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-199000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (50.150627765s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (50.15s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (144.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0816 06:58:05.980118    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:58:08.542900    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:58:13.664393    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:58:23.906510    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:58:28.096451    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m24.051626622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.05s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-199000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-199000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l2jtf" [175ba3a3-7316-40ea-a60a-0cbac05e2a77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l2jtf" [175ba3a3-7316-40ea-a60a-0cbac05e2a77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.00604902s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-199000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-199000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0816 07:13:45.185190    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:13:49.064462    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (55.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 06:59:08.897919    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:59:09.506010    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:59:25.351065    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 06:59:29.987716    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0: (55.113485055s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.11s)

TestStartStop/group/no-preload/serial/DeployApp (9.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-794000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [90e2a66d-6caa-40fd-912d-801314c801b1] Pending
helpers_test.go:344: "busybox" [90e2a66d-6caa-40fd-912d-801314c801b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [90e2a66d-6caa-40fd-912d-801314c801b1] Running
E0816 07:00:04.612339    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005708105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-794000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-794000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-794000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/no-preload/serial/Stop (8.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-794000 --alsologtostderr -v=3
E0816 07:00:05.923704    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:10.950887    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-794000 --alsologtostderr -v=3: (8.399412505s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-794000 -n no-preload-794000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-794000 -n no-preload-794000: exit status 7 (68.606001ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-794000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/SecondStart (311.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-794000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0: (5m11.791870616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-794000 -n no-preload-794000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (311.96s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-204000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b688d64d-3e2e-4881-9747-5ffa693586c5] Pending
helpers_test.go:344: "busybox" [b688d64d-3e2e-4881-9747-5ffa693586c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b688d64d-3e2e-4881-9747-5ffa693586c5] Running
E0816 07:00:37.456403    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.463431    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.475603    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.497445    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.540622    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.622900    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:37.785733    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:38.108641    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:38.750143    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003233591s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-204000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-204000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0816 07:00:40.031996    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-204000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-204000 --alsologtostderr -v=3
E0816 07:00:42.597888    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:47.288231    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:47.731707    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-204000 --alsologtostderr -v=3: (8.414247149s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 7 (71.314323ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-204000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
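The `minikube status` call above exits nonzero on purpose: the log records exit status 7 with the host reporting `Stopped`, and the test notes "status error: exit status 7 (may be ok)". A sketch of treating that specific code as expected rather than fatal; `fake_status` is a stand-in for the real CLI and hard-codes 7 only because that is the code logged above.

```shell
#!/bin/sh
# Capture an expected nonzero exit without aborting, then branch on the code.
fake_status() { echo "Stopped"; return 7; }   # stand-in for `minikube status`

rc=0
fake_status > /dev/null || rc=$?   # `|| rc=$?` keeps `set -e` scripts alive

case "$rc" in
  0) echo "host running" ;;
  7) echo "host stopped (expected right after a stop)" ;;
  *) echo "unexpected status: $rc" >&2; exit "$rc" ;;
esac
```

This is why the test can immediately run `addons enable dashboard` against the stopped profile: a stopped host is a valid state for addon configuration, not a failure.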

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (404.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0816 07:00:56.161998    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:00:57.989643    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:01:18.484510    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:01:25.074871    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:01:32.917944    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:01:52.786661    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:01:59.450747    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.174639    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.181959    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.193665    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.215887    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.258961    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.341334    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.504459    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:07.826140    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:08.467626    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:09.750423    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.313557    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.441699    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.449429    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.460938    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.483445    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.526109    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.607741    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:12.769993    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:13.093528    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:13.735012    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:15.016840    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:17.436420    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:17.580466    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:22.100675    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:22.702587    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:27.678266    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:32.944491    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:48.161384    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:49.813349    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:53.095887    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:02:53.426846    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:03.452956    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:21.374308    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:28.144821    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:29.123638    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:31.165529    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.187455    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.194205    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.205601    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.228104    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.269851    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.352256    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.514186    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:32.837608    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:33.480081    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:34.390167    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:34.761677    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:37.323710    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:42.445455    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:49.055746    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:03:52.687178    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:13.169158    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:16.764647    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:47.753279    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:51.046886    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:54.132210    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:04:56.313312    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:05:04.662400    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-204000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m43.862715519s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-204000 -n old-k8s-version-204000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (404.03s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-npjk5" [2b7ff146-8769-4dcb-bf0d-b2309654d46f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004784465s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-npjk5" [2b7ff146-8769-4dcb-bf0d-b2309654d46f] Running
E0816 07:05:37.507382    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00419766s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-794000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-794000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)
TestStartStop/group/no-preload/serial/Pause (2s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-794000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-794000 -n no-preload-794000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-794000 -n no-preload-794000: exit status 2 (162.797718ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-794000 -n no-preload-794000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-794000 -n no-preload-794000: exit status 2 (162.729563ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-794000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-794000 -n no-preload-794000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-794000 -n no-preload-794000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.00s)
TestStartStop/group/embed-certs/serial/FirstStart (52.99s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-388000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:06:05.220442    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:06:16.055955    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:06:25.083327    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-388000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0: (52.992044031s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.99s)
TestStartStop/group/embed-certs/serial/DeployApp (8.21s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-388000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f8bcaf7-b125-4f02-996c-7eef56411b52] Pending
helpers_test.go:344: "busybox" [1f8bcaf7-b125-4f02-996c-7eef56411b52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f8bcaf7-b125-4f02-996c-7eef56411b52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004967769s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-388000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.21s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-388000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-388000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)
TestStartStop/group/embed-certs/serial/Stop (8.41s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-388000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-388000 --alsologtostderr -v=3: (8.410328351s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.41s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-388000 -n embed-certs-388000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-388000 -n embed-certs-388000: exit status 7 (68.523667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-388000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
TestStartStop/group/embed-certs/serial/SecondStart (293.6s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-388000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:07:07.180160    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:07:12.446439    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:07:22.106271    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-388000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.0: (4m53.427973075s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-388000 -n embed-certs-388000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (293.60s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sccm6" [1e5d6bb9-5c9a-4d17-a5e0-337b3d820964] Running
E0816 07:07:34.890821    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002675824s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sccm6" [1e5d6bb9-5c9a-4d17-a5e0-337b3d820964] Running
E0816 07:07:40.157386    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00373003s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-204000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)
TestStartStop/group/old-k8s-version/serial/Pause (1.87s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-204000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-204000 -n old-k8s-version-204000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 2 (162.331599ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-204000 -n old-k8s-version-204000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-204000 -n old-k8s-version-204000: exit status 2 (162.507587ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-204000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-204000 -n old-k8s-version-204000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-204000 -n old-k8s-version-204000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.87s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.71s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:07:53.099723    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:08:03.458170    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:08:11.248401    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:08:28.149278    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:08:32.191038    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0: (49.713355943s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.71s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75f299cb-fea4-4fa8-9a34-88b3dd846c7e] Pending
helpers_test.go:344: "busybox" [75f299cb-fea4-4fa8-9a34-88b3dd846c7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75f299cb-fea4-4fa8-9a34-88b3dd846c7e] Running
E0816 07:08:49.060532    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/custom-flannel-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003832886s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-884000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-884000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-884000 --alsologtostderr -v=3: (8.425218815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 7 (69.161776ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-884000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:08:59.899843    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kubenet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.058763    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.066474    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.078280    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.100544    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.143764    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.225267    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.388285    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:56.710370    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:57.352685    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:09:58.635099    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:01.197782    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:04.666911    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/skaffold-975000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:06.319899    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:16.561630    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.254522    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.261442    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.273901    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.295785    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.337503    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.420839    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.582964    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:30.904996    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:31.548425    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:32.832077    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:35.394817    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:37.045451    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:37.511143    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/enable-default-cni-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:40.517425    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:10:50.759195    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:11:11.241285    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:11:18.008535    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:11:25.087160    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-884000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.0: (4m52.122215364s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4p4px" [6d769287-c0ec-4e93-8474-be760fa22587] Running
E0816 07:11:52.203421    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002640555s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4p4px" [6d769287-c0ec-4e93-8474-be760fa22587] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004568831s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-388000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-388000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-388000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-388000 -n embed-certs-388000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-388000 -n embed-certs-388000: exit status 2 (165.992282ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-388000 -n embed-certs-388000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-388000 -n embed-certs-388000: exit status 2 (166.790903ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-388000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-388000 -n embed-certs-388000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-388000 -n embed-certs-388000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.94s)

TestStartStop/group/newest-cni/serial/FirstStart (41.35s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:12:12.450642    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/bridge-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:12:22.111046    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/kindnet-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:12:39.931832    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/no-preload-794000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:12:48.158504    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/auto-199000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0: (41.347147921s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.35s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-343000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (8.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-343000 --alsologtostderr -v=3
E0816 07:12:53.104454    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/functional-525000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-343000 --alsologtostderr -v=3: (8.421062023s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000: exit status 7 (68.95194ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-343000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (29.57s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0
E0816 07:13:03.461056    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/calico-199000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:13:14.127591    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/old-k8s-version-204000/client.crt: no such file or directory" logger="UnhandledError"
E0816 07:13:28.154946    1554 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1009/.minikube/profiles/addons-040000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0: (29.392131057s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.57s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-343000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/newest-cni/serial/Pause (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-343000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000: exit status 2 (163.756413ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000: exit status 2 (162.054126ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-343000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.88s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mmql7" [b24e2853-9271-42c3-a3e7-dcb8d0d562e1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005464229s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mmql7" [b24e2853-9271-42c3-a3e7-dcb8d0d562e1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005869888s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-884000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-884000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-884000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 2 (161.201511ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000: exit status 2 (161.73953ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-884000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-884000 -n default-k8s-diff-port-884000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.91s)

Test skip (20/322)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-199000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-199000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-199000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/hosts:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/resolv.conf:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-199000

>>> host: crictl pods:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: crictl containers:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> k8s: describe netcat deployment:
error: context "cilium-199000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-199000" does not exist

>>> k8s: netcat logs:
error: context "cilium-199000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-199000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-199000" does not exist

>>> k8s: coredns logs:
error: context "cilium-199000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-199000" does not exist

>>> k8s: api server logs:
error: context "cilium-199000" does not exist

>>> host: /etc/cni:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: ip a s:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: ip r s:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: iptables-save:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: iptables table nat:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-199000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-199000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-199000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-199000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-199000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-199000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-199000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-199000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-199000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-199000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-199000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: kubelet daemon config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> k8s: kubelet logs:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-199000

>>> host: docker daemon status:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: docker daemon config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: docker system info:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: cri-docker daemon status:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: cri-docker daemon config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: cri-dockerd version:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: containerd daemon status:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: containerd daemon config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: containerd config dump:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: crio daemon status:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: crio daemon config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: /etc/crio:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

>>> host: crio config:
* Profile "cilium-199000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-199000"

----------------------- debugLogs end: cilium-199000 [took: 5.513590112s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-199000
--- SKIP: TestNetworkPlugins/group/cilium (5.73s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-827000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-827000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)